944 results for Localisation spatiale
Abstract:
The aim of this dissertation is to provide an adequate translation from English into Italian of a section of the European Commission's website concerning an environmental policy tool designed to reduce the EU's greenhouse gas emissions, the Emissions Trading System. The main reason behind this choice was the intention to combine a personal interest in the domain of sustainable development with the desire to delve deeper into the different aspects involved in the localisation process. I was also able to combine these two with my interest in the universe of the European Union. I therefore worked on the particular language of this supranational organisation, and for this reason I had the opportunity to undertake a very stimulating work placement at the Directorate-General for Translation in Brussels. However, the choice of the text was personal and the translation is not intended for publication. The work is divided into six chapters. In the first chapter the text is contextualised within the framework of the EU and its legislation on multilingualism, which has consequences for the languages used by the drafters of official documents and by translators. The text originates from those documents, but it needs to be adapted to different receivers. The second chapter investigates the process of website localisation. The third chapter offers an analysis of the source text and of the prospective target text. The fourth chapter describes the resources created and used for the translation of the text: a comparison is made between the resources of the European Commission's translation service and those created specifically for this project, namely a translation memory, exploited through a CAT tool, and two corpora. The fifth chapter contains the actual translation, side by side with the source text, while the sixth provides a commentary on the translation strategies.
Abstract:
This thesis aims to present the process of localising into Italian a French website, that of the Parc de loisir du Lac de Maine. In particular, its goal is to show that web localisation must take two essential factors into account, both of which contribute decisively to a site's success on the Internet. On the one hand, website usability, also known as web ergonomics, which aims to make websites easier for the end user to use, so that their interaction with the site is simple and intuitive. On the other hand, search engine optimisation, commonly called "SEO" after its English acronym, which seeks to identify the best techniques for optimising a website's visibility in search engine results pages. By improving a web page's ranking in those results pages, a site has a much greater chance of increasing its traffic and, therefore, its success. The first chapter of this thesis introduces localisation from a theoretical standpoint, illustrating its main characteristics and touching on its birth and origins. It also introduces the domain of the site to be localised, namely tourism, underlining the importance of the specialised language of tourism. The second chapter is devoted to search engine optimisation and to web ergonomics. Finally, the last chapter covers the localisation work on the Parc's site: the site and its optimisation and usability problems are analysed, and all the phases of the localisation process are shown, including the integration of several techniques intended to improve ease of use for end users as well as the site's ranking in search engine results pages.
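As a concrete illustration of the SEO side of such a localisation project, the snippet below generates `hreflang` alternate-link tags, a standard technique for telling search engines about the language versions of a page. The technique and the URLs are illustrative assumptions, not taken from the thesis itself.

```python
# Minimal sketch: emit hreflang alternate links for the localised
# versions of a page. The URLs below are hypothetical examples.
def hreflang_links(localized_urls):
    """localized_urls: mapping of language code -> localised URL."""
    return [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in sorted(localized_urls.items())
    ]

links = hreflang_links({
    "fr": "https://example.com/fr/parc",
    "it": "https://example.com/it/parco",
})
for line in links:
    print(line)
```

In practice these tags would be placed in the `<head>` of each localised page, so that each language version points at all the others.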
Abstract:
To assess rotation deficits, asphericity of the femoral head and localisation of cartilage damage in the follow-up after slipped capital femoral epiphysis (SCFE).
Abstract:
The carbon stock held in forests can be estimated in several ways. The best-known methods are destructive and require felling a large, representative number of trees. Such representativeness is difficult to achieve in tropical forests, which display exceptional species diversity, as in Madagascar. To assess the level of forest degradation, a remote sensing study was carried out by analysing the radiometric signal, combined with a non-destructive biomass inventory. The proposed study of landscape dynamics is therefore based on an atmospheric correction of a SPOT 5 image from 2009 and on a semi-supervised land cover classification combining a preliminary unsupervised classification, random sampling of the classes and a maximum likelihood supervised classification. Validation was performed using independent points recorded during the biomass inventories, with precise carbon stock values. The unsupervised classification brought out two forest classes, termed "slightly degraded" and "degraded". The first denotes the climax state (the carbon stock has reached a value that varies little), whereas the second is characterised by a carbon level lower than the climax level, which can nevertheless be reached again in the absence of disturbance. This first classification then makes it possible to distribute the inventory plots among the classes. The inventory method collects both classic dendrometric data (species, density, total height, bole height, diameter) and representative samples of branches and leaves from a tree. These parameters, together with wood density, are used to establish an allometric equation from which the total biomass of a tree, and consequently of the forest stand, is estimated.
Subsequently, the supervised classification was performed from random samples giving the class separability values of the final classification. In addition, the per-hectare carbon stock values estimated for each plot were used to validate this classification and to assess its accuracy. Knowing this level of degradation, derived from high-spatial-resolution satellite data combined with inventory data, opens the way to year-on-year monitoring of the carbon stock and, subsequently, to modelling the future carbon stock in different forest types.
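The maximum likelihood supervised classification mentioned above can be sketched as follows. This is a minimal, illustrative implementation assuming one Gaussian per class and per spectral band (diagonal covariance); the class names match the abstract, but the training samples and pixel values are invented.

```python
import math

# Fit a 1-D Gaussian (mean, variance) to the training samples of one band.
def fit_class(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

# Log-likelihood of a band value under a Gaussian with the given parameters.
def log_likelihood(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Assign a pixel to the class maximising the summed log-likelihood over bands.
def classify(pixel, classes):
    return max(
        classes,
        key=lambda c: sum(
            log_likelihood(x, m, v)
            for x, (m, v) in zip(pixel, classes[c])
        ),
    )

# Toy training pixels: for each class, one sample list per spectral band.
training = {
    "slightly degraded": [[10, 12, 11, 13], [40, 42, 41, 39]],
    "degraded": [[30, 33, 31, 32], [20, 22, 21, 19]],
}
classes = {c: [fit_class(band) for band in bands] for c, bands in training.items()}
print(classify([12, 41], classes))  # -> slightly degraded
```

A real workflow would fit full covariance matrices per class over all image bands, but the decision rule, picking the class with the highest likelihood, is the same.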
Abstract:
Because of the typical localisation of erosions in anorectic/bulimic patients, the dentist is frequently the first medical professional to recognise the underlying general illness (anorexia or bulimia nervosa). From the dental viewpoint, the aim should be to preserve sound dental tissue and to prevent further tooth wear. Restorative treatment should be carried out only after causal therapy and after the underlying disease has resolved; with this procedure a good long-term prognosis can be expected. Considering the patient's young age, dentistry should be conservative, using the adhesive technique. This case report documents the systematic procedure of the functional and esthetic rehabilitation of an eroded dentition and shows the factors essential to treatment.
Abstract:
The Cameroonian political scene is the subject of numerous contradictions and absurdities, such that charges of "irregularity" or "fraud" are always raised in the wake of an election. Among the causes of these irregularities is the unfair character of elections in Cameroon, observable in their organisation, their conduct and the counting of votes. This study, however, is chiefly concerned with the spatial occupation achieved by the various parties through their campaign posters, and with how these parties are represented in the collective imagination of Cameroonians. Whether in the size of the posters or in the broadcast airtime allotted to each party, everything is arranged so that the ruling party remains in power.
Abstract:
Limitations of the visual information provided to surgeons during laparoscopic surgery increase the difficulty of procedures, thereby reducing clinical indications and increasing training time. This work presents a novel augmented reality visualization approach that aims to improve the visual data supplied for the targeting of non-visible anatomical structures in laparoscopic visceral surgery. The approach aims to facilitate the localisation of hidden structures with minimal damage to surrounding structures and with minimal training requirements. The proposed augmented reality visualization approach overlays endoscopic images with virtual 3D models of underlying critical structures, in addition to targeting and depth information pertaining to the targeted structures. Image overlay was achieved through camera calibration techniques and the integration of an optically tracked endoscope into an existing image guidance system for liver surgery. The approach was validated in accuracy, clinical integration and targeting experiments. The mean accuracy of the overlay was 3.5 mm ± 1.9 mm, and 92.7% of targets within a liver phantom were successfully located laparoscopically by untrained subjects using the approach.
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to obtain a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi-)automatic parser. The need for robustness increases with the size of the linguistic data in the corpus, or in any other kind of text to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. Such sentences may be corrected in small-scale texts, but not, in general, across a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Number 3, building a formal grammar, was the main task of the project.
The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological point. Experience from previous projects showed that building a grammar by creating one huge block of metarules is more complicated than the incremental method, which begins with metarules covering the most common syntactic phenomena and adds less important ones later; this is especially advantageous for testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all word classes. During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development.
The consistency of new and modified rules of the formal grammar with the existing rules is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. The method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages share a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems for other languages. To transfer the system to another language, it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
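The test-bed consistency check described above can be sketched as follows: parse every sentence in a regression suite and flag any mismatch with the expected judgement. The toy context-free grammar, lexicon and sentences below are invented for illustration and are, of course, far simpler than the formalism used in the project.

```python
# Illustrative regression check of a grammar against a test-bed of
# sentences. The grammar is a toy CFG in Chomsky normal form, parsed
# with the CYK algorithm; a real robust parser would also recover from
# unknown words and grammatical errors instead of simply rejecting.
GRAMMAR = {           # lhs -> list of binary right-hand sides
    "S": [("NP", "VP")],
    "NP": [("Det", "N")],
    "VP": [("V", "NP")],
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "sees": "V"}

def parses(sentence):
    words = sentence.split()
    n = len(words)
    # chart[i][j] = set of nonterminals spanning words[i..j]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        if w not in LEXICON:
            return False
        chart[i][i].add(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for lhs, rules in GRAMMAR.items():
                    for b, c in rules:
                        if b in chart[i][k] and c in chart[k + 1][j]:
                            chart[i][j].add(lhs)
    return "S" in chart[0][n - 1]

# Test-bed: sentences paired with the expected accept/reject judgement.
TEST_BED = [("the dog sees the cat", True), ("dog the sees", False)]
failures = [(s, e) for s, e in TEST_BED if parses(s) != e]
print("inconsistencies:", failures)
```

Whenever a rule is added or modified, rerunning the whole test-bed immediately reveals any sentence whose judgement has changed, which is the essence of the incremental debugging method described above.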
Abstract:
OBJECTIVES: To merge clinical information from partly overlapping medical record databases of the Small Animal Teaching Hospital of the Vetsuisse Faculty, University of Berne, and to describe the frequencies and localisations of neurological diseases in dogs, as well as their age, gender, breed and geographical distributions. METHODS: In this retrospective study, a new database with specific variables and a diagnosis key list, 'VITAMIN D', was created and defined. A total of 4497 dogs (an average of 375 per year) with a well-documented neurological disease were included in the study. A key list for the diagnoses was developed and applied to either the presumptive or the clinical and neurohistopathological diagnosis, with a serial number, a code for localisation and a code for differential diagnoses. RESULTS: 1159 dogs (26 per cent) had a confirmed neurohistopathological diagnosis, 1431 (32 per cent) had a confirmed clinical diagnosis and 1491 (33 per cent) had a presumptive diagnosis. The most frequent breeds were mixed-breed dogs (577 of 4497, 13 per cent), followed by German shepherd dogs (466 of 4497, 10 per cent). The most common localisations were the forebrain (908 of 4497, 20 per cent) and the spinal cord at the thoracolumbar area (840 of 4497, 19 per cent). Most dogs were diagnosed with degenerative diseases (38 per cent), followed by inflammatory/infectious diseases (14 per cent). The highest numbers of submissions originated from the geographic regions around the referral hospital and from regions with higher human population densities. CLINICAL SIGNIFICANCE: By defining closed-list fields and allocating all data to the corresponding fields, a standardised database that can be used for further studies was generated. The analysis in this study gives examples of the possible uses of such a standardised database.
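Merging partly overlapping record databases on a shared case identifier, with a closed-list field validated against a key list, can be sketched as below. The field names, localisation codes and case records are hypothetical and are not drawn from the study's actual database schema.

```python
# Hypothetical sketch: merge two record databases keyed by case ID and
# validate a closed-list field against an allowed key list.
LOCALISATION_CODES = {"forebrain", "thoracolumbar", "cervical"}

def merge_records(db_a, db_b):
    merged = {}
    for db in (db_a, db_b):
        for case_id, record in db.items():
            # records for the same case ID are combined field by field
            merged.setdefault(case_id, {}).update(record)
    # enforce the closed-list constraint on the localisation field
    for case_id, record in merged.items():
        loc = record.get("localisation")
        if loc is not None and loc not in LOCALISATION_CODES:
            raise ValueError(f"{case_id}: unknown localisation {loc!r}")
    return merged

db_a = {"c1": {"breed": "German shepherd"}, "c2": {"breed": "mixed"}}
db_b = {"c1": {"localisation": "forebrain"}}
merged = merge_records(db_a, db_b)
print(merged["c1"])
```

Closed-list fields make such merges reliable: any value outside the key list is rejected at load time instead of silently fragmenting the statistics.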
Abstract:
AIM: To assess functional impairment, in terms of visual acuity reduction and visual field defects, in inactive ocular toxoplasmosis. METHODS: 61 patients with known ocular toxoplasmosis in a quiescent state were included in this prospective, cross-sectional study. A complete ophthalmic examination, retinal photodocumentation and standard automated perimetry (Octopus perimeter, program G2) were performed. Visual acuity was classified on the basis of the World Health Organization definition of visual impairment and blindness: normal (> or = 20/25), mild (20/25 to 20/60), moderate (20/60 to 20/400) and severe (<20/400). Visual field damage was correspondingly graded as mild (mean defect <4 dB), moderate (mean defect 4-12 dB) or severe (mean defect >12 dB). RESULTS: 8 (13%) patients presented with bilateral ocular toxoplasmosis; thus, a total of 69 eyes were evaluated. Visual field damage was encountered in 65 (94%) eyes, whereas only 28 (41%) eyes had reduced visual acuity, showing perimetric findings to be more sensitive in detecting chorioretinal damage (p<0.001). Correlation with the clinical localisation of chorioretinal scars was better for visual field (in 70% of instances) than for visual acuity (33%). Moderate to severe functional impairment was registered in 65.2% of eyes for visual field, and in 27.5% for visual acuity. CONCLUSION: In its quiescent stage, ocular toxoplasmosis was associated with permanent visual field defects in 94% of the eyes studied. Hence, standard automated perimetry may better reflect the functional damage caused by ocular toxoplasmosis than visual acuity.
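The visual field grading scheme stated in the methods translates directly into code. The handling of the exact boundary values (4 and 12 dB) is an assumption, since the abstract gives the bands only as <4, 4-12 and >12 dB.

```python
# Grade visual field damage from the mean defect (dB), following the
# bands given above: mild <4 dB, moderate 4-12 dB, severe >12 dB.
# Treating exactly 4 and 12 dB as "moderate" is an assumption.
def grade_field_damage(mean_defect_db):
    if mean_defect_db < 4:
        return "mild"
    if mean_defect_db <= 12:
        return "moderate"
    return "severe"

print(grade_field_damage(3.2))   # mild
print(grade_field_damage(7.5))   # moderate
print(grade_field_damage(15.0))  # severe
```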
Abstract:
AIM: The aim of this study was to obtain information about the neurological and cognitive outcome of a population-based group of children after paediatric ischaemic stroke. METHODS: Data came from the Swiss neuropaediatric stroke registry (SNPSR), from 1.1.2000 to 1.7.2002, including children (AIS 1) and neonates (AIS 2). At 18-24 months after the stroke, a follow-up examination was performed, including a history and neurological and neuropsychological assessments. RESULTS: 33/48 children (22 AIS 1, 11 AIS 2) participated in the study. Neurological outcome was good in 16/33. After childhood stroke, mean IQ was normal (94), but 6 children had an IQ < 85 (range 50-82), and neuropsychological problems were present in 75%. Performance IQ (93) was reduced compared with verbal IQ (101, p = 0.121), owing to problems in the domain of processing speed (89.5); auditory short-term memory was especially affected. Effects on school career were common. Outcome was worse in children after right-sided infarction. Children suffering a stroke in mid-childhood had the best prognosis. There was no clear relationship between outcome and the localisation of the lesion. After neonatal stroke, 7/11 children showed normal development, and epilepsy indicated a worse prognosis in the remaining 4. CONCLUSION: After paediatric stroke, neuropsychological problems are present in about 75% of children. Younger age at stroke, as well as the emergence of epilepsy, predicted a worse prognosis.
Abstract:
The ATP-binding cassette transporter A1 (ABCA1) mediates cellular cholesterol and phospholipid efflux, and is implicated in phosphatidylserine translocation and apoptosis. Loss of functional ABCA1 in null mice results in severe placental malformation. This study aimed to establish the placental localisation of ABCA1 and to investigate whether ABCA1 expression is altered in placentas from pregnancies complicated by pre-eclampsia and antiphospholipid syndrome (APS). ABCA1 mRNA and protein localisation studies were carried out using in situ hybridization and immunohistochemistry. Comparisons of gene expression were performed using real-time PCR and immunoblotting. ABCA1 mRNA and protein were localised to the apical syncytium of placental villi and to the endothelia of fetal blood vessels within the villi. ABCA1 mRNA expression was reduced in placentas from women with APS compared with controls (p<0.001), and this was paralleled by reductions in ABCA1 protein expression. There were no differences in ABCA1 expression between placentas from pre-eclamptic pregnancies and controls. The localisation of ABCA1 in the human placenta is consistent with a role in cholesterol and phospholipid transport. The decrease in ABCA1 protein in APS may reflect reduced cholesterol transport to the fetus, affecting the formation of cell membranes and decreasing the level of substrate available for steroidogenesis.
Abstract:
The potential health effects of inhaled engineered nanoparticles are largely unknown. To avoid and replace animal toxicity studies, a triple-cell co-culture system composed of epithelial cells, macrophages and dendritic cells was established, which simulates the most important barrier functions of the airway epithelium. Using this model, the toxic potential of titanium dioxide was assessed by measuring the production of reactive oxygen species and the release of tumour necrosis factor alpha. The intracellular localisation of titanium dioxide nanoparticles was analysed by energy-filtering transmission electron microscopy. Titanium dioxide nanoparticles were detected both as single particles without membranes and in membrane-bound agglomerates. Cells incubated with titanium dioxide particles showed elevated production of reactive oxygen species but no increase in the release of tumour necrosis factor alpha. Our in vitro model of the epithelial airway barrier offers a valuable tool for studying the interaction of particles with lung cells at a nanostructural level and for investigating the toxic potential of nanoparticles.
Abstract:
The following is an analysis of the role of computer-aided surgery using an infralabyrinthine-subcochlear approach to the petrous apex for cholesterol granulomas, with hearing preservation. In a retrospective case review covering 1996 to 2008, six patients were analysed at our tertiary referral centre's otorhinolaryngology outpatient clinic. The navigation system provided excellent intraoperative localisation of the carotid artery, the facial nerve and the bony entrance into the cholesterol cyst. In addition, operation time decreased from an initial 4 h to 2 h. Computer-aided surgery allows intraoperative monitoring of the position of the tip of the microsurgical instruments in this rare disease and in the delicate area of the petrous apex, providing a high level of safety.
Abstract:
OBJECTIVES: To evaluate the feasibility of fusion imaging compound tomography (FICT) of CT/MRI and single photon emission tomography (SPECT) versus planar scintigraphy alone (plSc) in the pre-surgical staging of vulvar cancer. MATERIALS AND METHODS: Analysis of consecutive patients with vulvar cancer who preoperatively underwent sentinel scintigraphy (planar and 3D SPECT imaging) and CT or MRI. Body markers were used for exact anatomical co-registration, and fusion datasets were reconstructed using SPECT and CT/MRI. The number and localisation of all intraoperatively identified and resected sentinel lymph nodes (SLN) were compared between planar and 3D fusion imaging. RESULTS: Twenty-six SLN were localised on planar scintigraphy. Twelve additional SLN were identified after SPECT and CT/MRI reconstruction, all of which were confirmed intraoperatively. In seven cases where single foci were identified at plSc, fusion imaging revealed grouped individual nodes, and five additional localisations were discovered at fusion imaging. In seven patients both methods identified SLN contralateral to the primary tumour site, but only fusion imaging allowed localisation of iliac SLN in four patients. All SLN predicted on fusion imaging could be localised and resected during surgery. CONCLUSIONS: Fusion imaging using SPECT and CT/MRI can detect SLN in vulvar cancer more precisely than planar imaging with regard to number and anatomical localisation. FICT revealed additional information in seven out of ten cases (70%).