191 results for standards Open Geospatial Consortium (OGC)


Relevance:

20.00%

Publisher:

Abstract:

Open surgery remains the main treatment for complex abdominal aortic aneurysms. Nevertheless, this approach is associated with major complications and a high mortality rate. Fenestrated endografts have therefore been used to treat juxtarenal aneurysms. Unfortunately, no randomised controlled study is available to assess the efficacy of such devices, and their cost is still prohibitive for generalising this approach. Alternative treatments such as the chimney or sandwich technique are being evaluated in order to avoid these disadvantages. The aim of this paper is to present the endovascular approach to treating juxtarenal aneurysms and to emphasize that this option should be used only by highly specialized vascular centres.


This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited for implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organised maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, using experimental variography, and following machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations that detects the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. General regression neural networks are proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on data from the 2004 Spatial Interpolation Comparison (SIC), in which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used in teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the estimation of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for environmental data mining purposes including pattern recognition, modeling and prediction as well as automatic data mapping. They have competitive efficiency relative to geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns described, at least, by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic: the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
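The GRNN this abstract refers to is, at its core, Nadaraya-Watson kernel regression: each prediction is a distance-weighted average of the observed values, with a single bandwidth parameter. As a rough illustration only (this is not code from the thesis or from Machine Learning Office; the function name, toy data and bandwidth are ours), a minimal NumPy sketch of such a spatial interpolator might look like this:

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """GRNN / Nadaraya-Watson kernel regression: predict each query
    point as the Gaussian-weighted average of the training values."""
    # squared Euclidean distances between every query and training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights
    return (w @ train_z) / w.sum(axis=1)   # weighted average per query

# toy example: interpolate the surface z = x + y from scattered samples
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))     # 200 random sample locations
z = xy[:, 0] + xy[:, 1]                    # observed values
pred = grnn_predict(xy, z, np.array([[5.0, 5.0]]), sigma=0.5)
```

In practice the bandwidth `sigma` would be tuned, e.g. by cross-validation, which is what makes the model adaptive to the data.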


BACKGROUND: Systematic reviews and meta-analyses of pre-clinical studies, in vivo animal experiments in particular, can influence clinical care. Publication bias is one of the major threats to validity in systematic reviews and meta-analyses. Previous empirical studies suggested that systematic reviews and meta-analyses became increasingly prevalent up to 2010 and found evidence of compromised methodological rigor, with a trend towards improvement. We aim to comprehensively summarize and update the evidence base on systematic reviews and meta-analyses of animal studies, their methodological quality and, in particular, their assessment of publication bias. METHODS/DESIGN: The objectives of this systematic review are as follows:
- To investigate the epidemiology of published systematic reviews of animal studies up to the present.
- To examine methodological features of systematic reviews and meta-analyses of animal studies, with special attention to the assessment of publication bias.
- To investigate the influence of systematic reviews of animal studies on clinical research by examining citations of the systematic reviews by clinical studies.
Eligible studies for this systematic review are systematic reviews and meta-analyses that summarize in vivo animal experiments with the purpose of reviewing animal evidence to inform human health. We will exclude genome-wide association studies and animal experiments whose main purpose is to learn more about fundamental biology, physical functioning or behavior. In addition to including systematic reviews and meta-analyses identified by other empirical studies, we will systematically search Ovid Medline, Embase, ToxNet, and ScienceDirect from 2009 to January 2013 for further eligible studies, without language restrictions. Two reviewers working independently will assess titles, abstracts, and full texts for eligibility and extract relevant data from included studies.
Data reporting will involve a descriptive summary of meta-analyses and systematic reviews. DISCUSSION: Results are expected to be publicly available later in 2013 and may form the basis for recommendations to improve the quality of systematic reviews and meta-analyses of animal studies and their use with respect to clinical care.


PURPOSE OF THE STUDY: Fracture of the tibial pilon is a rare injury and its treatment remains difficult. The aim of this study was to report the complications and long-term results of internal fixation using a technique that respects the soft tissues and uses little hardware. MATERIAL: From 1985 to 1990, 48 patients with 51 fractures of the tibial pilon were treated by open reduction and internal fixation. All patients underwent clinical and radiological review. METHODS: Both the Rüedi/Allgöwer and the AO classifications were used, determined on standard X-rays. Fixation was performed with two or three one-third tubular AO plates, and the fibula was always fixed if fractured. Intraoperative reconstruction was analyzed. Subjective and objective scoring were performed according to Olerud and Molander, and ankle arthritis was scored according to the classification established by the SOFCOT in 1992. RESULTS: A minimum follow-up of 1 year was obtained for all cases, based on our own files. Thirty-eight patients (40 fractures) were evaluated after an average period of 88 months (56 to 124 months). Five patients developed cutaneous infection, three developed deep infection and four developed superficial skin necrosis. One aseptic non-union necessitated reoperation after 14 months. Two ankles required joint fusion, after 19 and 25 months respectively, due to severe arthritis. In six cases, infectious and non-infectious complications led to surgical revision. According to the Olerud and Molander score, 15 per cent of the results were excellent, 45 per cent good, 30 per cent fair and 10 per cent poor. DISCUSSION: The literature shows a wide range of results following this surgical procedure, owing to differences in the type of trauma, the classification system used, the material used for internal fixation and the method of evaluation.
The classification system of Rüedi and Allgöwer is the most commonly used but is rather subjective, especially in distinguishing type II from type III. Treatment is difficult, especially for comminuted fractures associated with soft-tissue damage, in which open reduction and internal fixation can add iatrogenic lesions. For this reason, surgery may be delayed for several days, little hardware is used and soft-tissue manipulation is kept to a minimum. In other reports, external fixation with or without minimal internal fixation has produced fewer complications without improving long-term results. CONCLUSION: Analysis and comparison of published reports are difficult because of the absence of consensus on classification systems and evaluation methods. The AO classification, apparently the most objective, will probably be used more and more in the future. Treatment must be adapted to the bony lesion and the soft-tissue damage. Open reduction and internal fixation must be reserved for a specific group of lesions.


The approval in 2004 of bevacizumab (Avastin), a neutralizing monoclonal antibody directed against vascular endothelial growth factor (VEGF), as the first anti-angiogenic systemic drug to treat cancer patients validated the notion, introduced 33 years earlier by Dr. Judah Folkman, that inhibition of tumor angiogenesis might be a valid approach to control tumor growth. Anti-angiogenic therapy was greeted in the clinic as a major step forward in cancer treatment, and this success has boosted the field's quest for new anti-angiogenic targets and drugs. In spite of this success, however, some old questions in the field have remained unanswered and new ones have emerged. They include the identification of surrogate markers of angiogenesis and anti-angiogenesis, the understanding of how anti-angiogenic therapy and chemotherapy synergize, the characterization of the biological consequences of sustained suppression of angiogenesis on tumor biology and normal tissue homeostasis, and the mechanisms of tumor escape from anti-angiogenesis. In this review we summarize some of these outstanding questions and highlight future challenges in clinical, translational and experimental research on anti-angiogenic therapy that need to be addressed in order to improve current treatments and to design new drugs.


Recent genome-wide association (GWA) studies described 95 loci controlling serum lipid levels. These common variants explain ∼25% of the heritability of the phenotypes. To date, no unbiased screen for gene-environment interactions for circulating lipids has been reported. We screened for variants that modify the relationship between known epidemiological risk factors and circulating lipid levels in a meta-analysis of GWA data from 18 population-based cohorts of European ancestry (maximum N = 32,225). We collected 8 further cohorts (N = 17,102) for replication, and rs6448771 on 4p15 demonstrated a genome-wide significant interaction with waist-to-hip ratio (WHR) on total cholesterol (TC), with a combined P-value of 4.79×10(-9). There were two potential candidate genes in the region, PCDH7 and CCKAR, with differential expression levels across rs6448771 genotypes in adipose tissue. The effect of WHR on TC was strongest for individuals carrying two copies of the G allele, for whom a one standard deviation (sd) difference in WHR corresponds to a 0.19 sd difference in TC concentration, while for A-allele homozygotes the difference was 0.12 sd. Our findings may open up possibilities for targeted intervention strategies for people characterized by specific genomic profiles. However, more refined measures of both body-fat distribution and metabolic traits are needed to understand how their joint dynamics are modified by the newly found locus.
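The genotype-dependent slopes reported above are what a gene-environment interaction term in a regression captures. The sketch below is a toy illustration, not the study's actual analysis pipeline: the data are simulated, the variable names are ours, and the effect sizes (0.12 sd per sd of WHR for A-allele homozygotes, 0.19 sd for G-allele homozygotes) are merely borrowed from the abstract to seed the simulation. Fitting TC ~ WHR + genotype + WHR×genotype by least squares recovers the per-allele change in slope as the interaction coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
geno = rng.integers(0, 3, n)             # 0/1/2 copies of the G allele
whr = rng.normal(0, 1, n)                # standardized waist-to-hip ratio
slope = 0.12 + 0.035 * geno              # simulate: 0.12 sd (AA) -> 0.19 sd (GG)
tc = slope * whr + rng.normal(0, 1, n)   # standardized total cholesterol

# design matrix: intercept, WHR, genotype, WHR x genotype interaction
X = np.column_stack([np.ones(n), whr, geno, whr * geno])
beta, *_ = np.linalg.lstsq(X, tc, rcond=None)
# beta[3] estimates the per-allele change in the WHR -> TC slope (~0.035)
```

A GWA interaction screen essentially repeats such a fit (with covariates and meta-analysis across cohorts) for every SNP, testing whether the interaction coefficient differs from zero.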


BACKGROUND: Chronic kidney disease is associated with cardiovascular disease. We tested for evidence of a shared genetic basis to these traits. STUDY DESIGN: We conducted 2 targeted analyses. First, we examined whether known single-nucleotide polymorphisms (SNPs) underpinning kidney traits were associated with a series of vascular phenotypes. Additionally, we tested whether vascular SNPs were associated with markers of kidney damage. Significance was set to 1.5×10(-4) (0.05/325 tests). SETTING & PARTICIPANTS: Vascular outcomes were analyzed in participants from the AortaGen (20,634), CARDIoGRAM (86,995), CHARGE Eye (15,358), CHARGE IMT (31,181), ICBP (69,395), and NeuroCHARGE (12,385) consortia. Tests for kidney outcomes were conducted in up to 67,093 participants from the CKDGen consortium. PREDICTOR: We used 19 kidney SNPs and 64 vascular SNPs. OUTCOMES & MEASUREMENTS: Vascular outcomes tested were blood pressure, coronary artery disease, carotid intima-media thickness, pulse wave velocity, retinal venular caliber, and brain white matter lesions. Kidney outcomes were estimated glomerular filtration rate and albuminuria. RESULTS: In general, we found that kidney disease variants were not associated with vascular phenotypes (127 of 133 tests were nonsignificant). The one exception was rs653178 near SH2B3 (SH2B adaptor protein 3), which showed direction-consistent association with systolic (P = 9.3 ×10(-10)) and diastolic (P = 1.6 ×10(-14)) blood pressure and coronary artery disease (P = 2.2 ×10(-6)), all previously reported. Similarly, the 64 SNPs associated with vascular phenotypes were not associated with kidney phenotypes (187 of 192 tests were nonsignificant), with the exception of 2 highly correlated SNPs at the SH2B3 locus (P = 1.06 ×10(-07) and P = 7.05 ×10(-08)). LIMITATIONS: The combined effect size of the SNPs for kidney and vascular outcomes may be too low to detect shared genetic associations.
CONCLUSIONS: Overall, although we confirmed one locus (SH2B3) as associated with both kidney and cardiovascular disease, our primary findings suggest that there is little overlap between kidney and cardiovascular disease risk variants in the overall population. The reciprocal risks of kidney and cardiovascular disease may not be genetically mediated, but rather a function of the disease milieu itself.


Objectives: To assess the difference in direct medical costs between on-demand (OD) treatment with esomeprazole (E) 20 mg and continuous (C) treatment with E 20 mg q.d., from a clinical practice perspective, in patients with gastroesophageal reflux disease (GERD) symptoms. Methods: This open, randomized study (ONE: On-demand Nexium Evaluation) compared two long-term management options with E 20 mg in endoscopically uninvestigated patients seeking primary care for GERD symptoms who demonstrated complete relief of symptoms after an initial 4-week treatment with E 40 mg. Data on the consumed quantities of all cost items were collected in the study, while data on prices during the study period were collected separately. The analysis was done from a societal perspective. Results: Forty-nine percent (484 of 991) of patients randomized to the OD regimen and 46% (420 of 913) of patients in the C group had at least one contact with the investigator that would have occurred independently of the protocol. The difference in adjusted mean direct medical costs between the treatment groups was CHF 88.72 (95% confidence interval: CHF 41.34-153.95) in favor of the OD treatment strategy (Wilcoxon rank-sum test: P < 0.0001). Adjusted direct nonmedical costs and productivity losses were similar in both groups. Conclusions: The adjusted direct medical costs of a 6-month OD treatment with esomeprazole 20 mg in uninvestigated patients with symptoms of GERD were significantly lower than those of continuous treatment with E 20 mg once a day. OD therapy represents a cost-saving alternative to the continuous treatment strategy with E.


The authors pooled data from 15 case-control studies of head and neck cancer (9,107 cases, 14,219 controls) to investigate the independent associations with consumption of beer, wine, and liquor. In particular, they calculated associations with different measures of beverage consumption separately for subjects who drank beer only (858 cases, 986 controls), for liquor-only drinkers (499 cases, 527 controls), and for wine-only drinkers (1,021 cases, 2,460 controls), with alcohol never drinkers (1,124 cases, 3,487 controls) used as a common reference group. The authors observed similar associations with ethanol-standardized consumption frequency for beer-only drinkers (odds ratios (ORs) = 1.6, 1.9, 2.2, and 5.4 for < or =5, 6-15, 16-30, and >30 drinks per week, respectively; P(trend) < 0.0001) and liquor-only drinkers (ORs = 1.6, 1.5, 2.3, and 3.6; P < 0.0001). Among wine-only drinkers, the odds ratios for moderate levels of consumption frequency approached the null, whereas those for higher consumption levels were comparable to those of drinkers of other beverage types (ORs = 1.1, 1.2, 1.9, and 6.3; P < 0.0001). Study findings suggest that the relative risks of head and neck cancer for beer and liquor are comparable. The authors observed weaker associations with moderate wine consumption, although they cannot rule out confounding from diet and other lifestyle factors as an explanation for this finding. Given the presence of heterogeneity in study-specific results, their findings should be interpreted with caution.
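The odds ratios quoted above come from standard case-control arithmetic on exposed versus reference counts. A minimal sketch (the counts below are hypothetical, not the study's data, and the Woolf method shown for the confidence interval is one common choice, not necessarily the one the authors used):

```python
import math

def odds_ratio(cases_exp, controls_exp, cases_ref, controls_ref):
    """Unadjusted odds ratio with a 95% CI (Woolf / log method)."""
    or_ = (cases_exp * controls_ref) / (controls_exp * cases_ref)
    # standard error of log(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(sum(1.0 / x for x in
                       (cases_exp, controls_exp, cases_ref, controls_ref)))
    lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
    return or_, lo, hi

# hypothetical counts: 120/80 exposed cases/controls vs 300/500 never drinkers
or_, lo, hi = odds_ratio(120, 80, 300, 500)
```

The pooled analysis additionally adjusts such estimates for confounders via logistic regression and combines them across studies, but the exposed-versus-reference contrast is the same.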


BACKGROUND: Head and neck cancer (HNC) risk is elevated among lean people and reduced among overweight or obese people in some studies; however, it is unknown whether these associations differ for certain subgroups or are influenced by residual confounding from the effects of alcohol and tobacco use or by other sources of bias. METHODS: We pooled data from 17 case-control studies including 12,716 cases and 17,438 controls. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated for associations between body mass index (BMI) at different ages and HNC risk, adjusted for age, sex, centre, race, education, tobacco smoking and alcohol consumption. RESULTS: Adjusted ORs (95% CIs) were elevated for lean people and reduced for people with BMI at reference (date of diagnosis for cases and date of selection for controls) of 25.0-30.0 kg/m(2) (0.52, 0.44-0.60) and BMI >=30 kg/m(2) (0.43, 0.33-0.57), compared with BMI 18.5-25.0 kg/m(2). These associations did not differ by age, sex, tumour site or control source, although the reduced risk among people with BMI above 25 kg/m(2) was present only in smokers and drinkers. CONCLUSIONS: In our large pooled analysis, leanness was associated with increased HNC risk regardless of smoking and drinking status, although reverse causality cannot be excluded. The reduced risk among overweight or obese people may indicate that body size is a modifier of the risk associated with smoking and drinking. Further clarification may be provided by analyses of prospective cohort and mechanistic studies.


BACKGROUND: The outcome of diffuse large B-cell lymphoma has been substantially improved by the addition of the anti-CD20 monoclonal antibody rituximab to chemotherapy regimens. We aimed to assess, in patients aged 18-59 years, the potential survival benefit provided by a dose-intensive immunochemotherapy regimen plus rituximab compared with standard treatment plus rituximab. METHODS: We did an open-label randomised trial comparing dose-intensive rituximab, doxorubicin, cyclophosphamide, vindesine, bleomycin, and prednisone (R-ACVBP) with subsequent consolidation versus standard rituximab, doxorubicin, cyclophosphamide, vincristine, and prednisone (R-CHOP). Random assignment was done with a computer-assisted randomisation-allocation sequence with a block size of four. Patients were aged 18-59 years with untreated diffuse large B-cell lymphoma and an age-adjusted international prognostic index equal to 1. Our primary endpoint was event-free survival. Our analyses of efficacy and safety were of the intention-to-treat population. This study is registered with ClinicalTrials.gov, number NCT00140595. FINDINGS: One patient withdrew consent before treatment and 54 did not complete treatment. After a median follow-up of 44 months, our 3-year estimate of event-free survival was 81% (95% CI 75-86) in the R-ACVBP group and 67% (59-73) in the R-CHOP group (hazard ratio [HR] 0·56, 95% CI 0·38-0·83; p=0·0035). 3-year estimates of progression-free survival (87% [95% CI, 81-91] vs 73% [66-79]; HR 0·48 [0·30-0·76]; p=0·0015) and overall survival (92% [87-95] vs 84% [77-89]; HR 0·44 [0·28-0·81]; p=0·0071) were also increased in the R-ACVBP group. 82 (42%) of 196 patients in the R-ACVBP group experienced a serious adverse event compared with 28 (15%) of 183 in the R-CHOP group. Grade 3-4 haematological toxic effects were more common in the R-ACVBP group, with a higher proportion of patients experiencing a febrile neutropenic episode (38% [75 of 196] vs 9% [16 of 183]). 
INTERPRETATION: Compared with standard R-CHOP, intensified immunochemotherapy with R-ACVBP significantly improves the survival of patients aged 18-59 years with diffuse large B-cell lymphoma at low-intermediate risk according to the International Prognostic Index. Haematological toxic effects of the intensive regimen were increased but manageable. FUNDING: Groupe d'Etudes des Lymphomes de l'Adulte and Amgen.


Background The association between dietary patterns and head and neck cancer has rarely been addressed. Patients and methods We used individual-level pooled data from five case-control studies (2452 cases and 5013 controls) participating in the International Head and Neck Cancer Epidemiology consortium. A posteriori dietary patterns were identified through a principal component factor analysis carried out on 24 nutrients derived from study-specific food-frequency questionnaires. Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were estimated using unconditional logistic regression models on quintiles of factor scores. Results We identified three major dietary patterns, named 'animal products and cereals', 'antioxidant vitamins and fiber', and 'fats'. The 'antioxidant vitamins and fiber' pattern was inversely related to oral and pharyngeal cancer (OR = 0.57, 95% CI 0.43-0.76 for the highest versus the lowest score quintile). The 'animal products and cereals' pattern was positively associated with laryngeal cancer (OR = 1.54, 95% CI 1.12-2.11), whereas the 'fats' pattern was inversely associated with oral and pharyngeal cancer (OR = 0.78, 95% CI 0.63-0.97) and positively associated with laryngeal cancer (OR = 1.69, 95% CI 1.22-2.34). Conclusions These findings suggest that diets rich in animal products, cereals, and fats are positively related to laryngeal cancer, whereas diets rich in fruit and vegetables are inversely related to oral and pharyngeal cancer.
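The "a posteriori dietary pattern" step described above can be sketched as principal components extracted from a standardized subjects-by-nutrients matrix, with subjects then binned into quintiles of the factor scores for use in logistic regression. A toy NumPy version (entirely simulated data; real analyses of this kind typically use rotated factor analysis rather than plain PCA, and would feed the quintiles into an adjusted regression model):

```python
import numpy as np

rng = np.random.default_rng(2)
nutrients = rng.normal(size=(500, 24))   # 500 subjects x 24 nutrients (toy data)

# standardize each nutrient to mean 0, sd 1
z = (nutrients - nutrients.mean(axis=0)) / nutrients.std(axis=0)

# principal components via SVD of the standardized matrix;
# rows of vt are the component loadings ("dietary patterns")
_, s, vt = np.linalg.svd(z, full_matrices=False)
scores = z @ vt[0]                       # first factor score per subject

# bin subjects into quintiles of the factor score (0..4)
edges = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(scores, edges)
```

The highest-versus-lowest quintile contrast reported in the abstract then corresponds to comparing `quintile == 4` against `quintile == 0` in the regression.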


This article examines the role played by international technical standards in the globalisation of service activities. Various economic approaches consider that the specific features of service activities are a brake on their offshoring, industrialisation and standardisation. In contrast to these approaches centred on the specificities of service activities, international political economy approaches emphasise the conflicting configurations of power at work in the internationalisation of service activities, beyond sectoral and national boundaries. The article examines the case of the call-centre sector and, more generally, that of business process outsourcing (BPO) in India. Our results suggest that technical standards matter in the sector studied, even though these types of services are conventionally identified as unlikely to be subject to standards. A political economy perspective on the standardisation of service activities highlights how questions of power invest technical standardisation with a more progressive dimension through the themes of the "worker", the "consumer" and the "environment". Abstract: This paper explores the role of international standards in the much-debated globalisation of the service economy. Various strands of economic analysis consider that core attributes of services affect their ability to be reliably delocalised, industrialised, and standardised. In contrast, international political economy approaches draw attention to power configurations supporting conflicting uses of standards across industries and nations. The paper examines the case of the rising Indian service industry in customer centres and business process outsourcing to probe these opposing views.
Our findings suggest that standards matter in types of services that conventional economic analyses identify as unlikely to be standardised, and that the standards used in the Indian BPO industry are widely accepted. Despite little conflict in actual definitions of market requirements, an international political economy perspective on service standardisation highlights the importance of potential power issues related to workers', consumers', and environmental concerns likely to be included in more progressive forms of standardisation.