875 results for Multilevel thresholding


Relevance:

10.00%

Publisher:

Abstract:

Unlike infections occurring during periods of chemotherapy-induced neutropenia, postoperative infections in patients with solid malignancy remain largely understudied. The purpose of this population-based study was to evaluate the clinical and economic burden, as well as the relationship between hospital surgical volume and outcomes, associated with serious postoperative infection (SPI), i.e., bacteremia/sepsis, pneumonia, and wound infection, following resection of common solid tumors. From the Texas Discharge Data Research File, we identified all Texas residents who underwent resection of cancer of the lung, esophagus, stomach, pancreas, colon, or rectum between 2002 and 2006. From their billing records, we identified ICD-9 codes indicating SPI, as well as subsequent SPI-related readmissions occurring within 30 days of surgery. Random-effects logistic regression was used to estimate the impact of SPI on mortality, as well as the association between surgical volume and SPI, adjusting for case mix, hospital characteristics, and the clustering of multiple surgical admissions within the same patient and of patients within the same hospital. Excess bed days and costs were calculated by subtracting the values for patients without infection from those for patients with infection, computed using a multilevel mixed-effects generalized linear model fitting a gamma distribution to the data with a log link. Serious postoperative infection occurred following 9.4% of the 37,582 eligible tumor resections and was independently associated with an 11-fold increase in the odds of in-hospital mortality (95% confidence interval [95% CI], 6.7-18.5; P < 0.001). Patients with SPI required 6.3 additional hospital days (95% CI, 6.1-6.5) at an incremental cost of $16,396 (95% CI, $15,927-$16,875). There was a significant trend toward lower overall rates of SPI with higher surgical volume (P = 0.037).
Because of the substantial morbidity, mortality, and excess costs associated with SPI following solid tumor resection, and given that, under current reimbursement practices, most of this heavy burden is borne by acute care providers, it is imperative for hospitals to identify more effective prophylactic measures so that these potentially preventable infections and their associated expenditures can be averted. Additional volume-outcomes research is also needed to identify infection prevention processes that can be transferred from higher- to lower-volume providers.
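The cost model above (a gamma GLM with a log link and an infection indicator) has a useful special case: with a single binary covariate, the fitted group means reduce to the arithmetic group means, so the incremental cost is simply their difference. A toy sketch of that reduced case, with made-up cost figures, not the study's data:

```python
from math import log

def incremental_burden(costs_infected, costs_uninfected):
    """Reduced form of a gamma GLM with log link and one binary
    covariate (infected yes/no): the fitted group means equal the
    arithmetic group means, so the excess cost is their difference.
    Returns (intercept, infection effect on the log scale, excess)."""
    mu_inf = sum(costs_infected) / len(costs_infected)
    mu_non = sum(costs_uninfected) / len(costs_uninfected)
    b0 = log(mu_non)                 # intercept: log mean cost, uninfected
    b1 = log(mu_inf) - log(mu_non)   # infection effect: log cost ratio
    excess = mu_inf - mu_non         # incremental cost estimate
    return b0, b1, excess
```

The full multilevel model additionally adjusts for case mix and hospital-level clustering, which this two-group reduction deliberately omits.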

Relevance:

10.00%

Publisher:

Abstract:

Objective: To describe and understand regional differences in, and the multilevel factors (patient, provider, and regional) associated with, inappropriate utilization of advanced imaging tests in the privately insured population of Texas. Methods: We analyzed the Blue Cross Blue Shield of Texas claims dataset to study advanced imaging utilization during 2008-2010 in the PPO/PPO+ plans. We used three of the CMS "Hospital Outpatient Quality Reporting" imaging efficiency measures: ordering MRI for low back pain without prior conservative management (OP-8), and utilization of combined with- and without-contrast abdominal CT (OP-10) and thorax CT (OP-11). Means and variation by hospital referral region (HRR) in Texas were measured, and a multilevel logistic regression for being a provider with high values on any of the three OP measures was used in the analysis. We also analyzed OP-8 at the individual level, using a multilevel logistic regression to identify predictive factors for having an inappropriate MRI for low back pain. Results: Mean OP-8 for Texas providers was 37.89%, mean OP-10 was 29.94%, and mean OP-11 was 9.24%. Variation was higher for the CT measures, and certain HRRs were consistently above the mean. Hospital providers had higher odds of high OP-8 values (OP-8: OR, 1.34; CI, 1.12-1.60) but lower odds of high OP-10 and OP-11 values (OP-10: OR, 0.15; CI, 0.12-0.18; OP-11: OR, 0.43; CI, 0.34-0.53). Providers with the highest volume of imaging studies performed were less likely to have high OP-8 measures (OP-8: OR, 0.58; CI, 0.48-0.70) but more likely to perform combined thoracic CT scans (OP-11: OR, 1.62; CI, 1.34-1.95). Males had higher odds of inappropriate MRI (OR, 1.21; CI, 1.16-1.26). The pattern of care in the six months prior to the MRI event was significantly associated with having an inappropriate MRI. Conclusion: We identified significant variation in advanced imaging utilization across Texas.
Type of facility was associated with measure performance, but the associations differed according to the type of study. Finally, certain individual characteristics, such as gender, age, and pattern of care, were found to be predictors of inappropriate MRIs.
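The odds ratios reported above come from multilevel logistic regressions. For a single binary predictor with no adjustment, the logistic coefficient reduces to the log odds ratio of a 2x2 table, which can be computed directly. A minimal sketch of that unadjusted analogue, with invented counts:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    This is the single-predictor analogue of the study's logistic
    regressions, without any multilevel adjustment."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi
```

The published estimates additionally adjust for patient, provider, and regional covariates, so they will generally differ from a raw 2x2 computation.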

Relevance:

10.00%

Publisher:

Abstract:

Thawing-induced cliff top retreat in permafrost landscapes is mainly due to thermo-erosion. Ground-ice-rich permafrost landscapes are especially vulnerable to thermo-erosion and may show high degradation rates. Within the HGF Alliance Remote Sensing and the FP7 PAGE21 permafrost programs, we investigated how SAR and optical remote sensing can contribute to monitoring erosion rates of ice-rich cliffs in Arctic Siberia (Lena Delta, Russia). We produced two different vector products: (i) Intra-annual cliff top retreat based on TerraSAR-X (TSX) satellite data (2012-2014): high-temporal-resolution time series of TSX satellite data allow inter-annual and intra-annual monitoring of the upper cliff-line retreat, even under bad weather conditions and continuous cloud cover. This published SAR product contains the retreating upper cliff lines of a 1.5 km long stretch of eroding ice-rich coast on Kurungnakh Island in the central Lena Delta. The upper cliff line was mapped using a thresholding approach on images acquired in 2012, 2013 and 2014, for the months of June (2013, 2014), July (2013, 2014), August (2012, 2013, 2014) and September (2013, 2014). The cliff top retreat vector product is called 'upper_cliff_TerraSAR-X'. While the 2014 cliff lines show a clear retreat of 2 to 3 m/month, the cliff top lines for 2012 and 2013 are not chronologically ordered. However, lines from the end of one season are always close to the lines from the beginning of the next summer season, indicating low cliff retreat in winter. (ii) 4-year cliff top retreat based on optical satellite data (2010-2014): long-term cliff top retreat was assessed with two high-spatial-resolution optical satellite images (GeoEye-1, 2010-08-05, and WorldView-1, 2014-08-19). The cliff top retreat vector product is called 'upper_cliff_optical'. Results: The long-term cliff top retreat derived from the optical satellite data amounts to 35 m within 4 years.
The higher-temporal-resolution SAR data equivalently show long-term rates of 18 m within 2 years, with nearly no degradation activity in winter but maximum erosion rates in the summer months. The intra-seasonal cliff top retreat lines from 2014 show a rate of 2 to 3 m per month.
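The cliff-line mapping above relies on a thresholding step whose exact method the abstract does not name. As a plausible stand-in, the sketch below implements Otsu's classic global threshold (maximising between-class variance) on a toy list of 8-bit amplitude values:

```python
def otsu_threshold(pixels, levels=256):
    """Global Otsu threshold: returns the grey level that maximises
    the between-class variance of the foreground/background split.
    A generic stand-in for the (unspecified) thresholding used to
    separate cliff face from tundra in the SAR amplitude images."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                 # background weight
        if w0 == 0:
            continue
        w1 = total - w0               # foreground weight
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0               # background mean
        mu1 = (sum_all - sum0) / w1   # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Applied per image, the resulting binary mask's boundary could then be vectorised into a cliff line, which is the part the sketch leaves out.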

Relevance:

10.00%

Publisher:

Abstract:

This paper assesses the impact of climate change on China's agricultural production at a cross-provincial level using the Ricardian approach, incorporating a multilevel model with farm-level group data. The farm-level group data include 13,379 farm households across 316 villages, distributed over 31 provinces. The empirical results show, firstly, that the marginal effects and elasticities of net crop revenue per hectare with respect to climate factors indicate that the annual impact of temperature on net crop revenue per hectare was positive, while the effect of increased precipitation was negative at the national level; secondly, that the total impact of the simulated climate change scenarios on net crop revenue per hectare, at the Chinese national level, was an increase of between 79 USD and 207 USD per hectare for the 2050s, and an increase of between 140 USD and 355 USD per hectare for the 2080s. As a result, climate change may create a potential advantage for the development of Chinese agriculture, rather than a risk, especially for agriculture in the provinces of the Northeast, Northwest and North regions. However, increased precipitation can lead to a loss of net crop revenue per hectare, especially for the provinces of the Southwest, Northwest, North and Northeast regions.
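Ricardian studies typically regress net revenue on climate variables and their squares, so a marginal effect is evaluated as the derivative of that quadratic. A minimal sketch of the two quantities reported above; the coefficients in the test are invented for illustration, not the paper's estimates:

```python
def marginal_effect(b_lin, b_quad, x):
    """Marginal effect of a climate variable in a quadratic Ricardian
    specification R = ... + b_lin*x + b_quad*x**2 + ...:
    dR/dx = b_lin + 2*b_quad*x, evaluated at x."""
    return b_lin + 2 * b_quad * x

def elasticity(b_lin, b_quad, x, revenue):
    """Elasticity of net revenue with respect to x:
    (dR/dx) * x / R, evaluated at the given levels."""
    return marginal_effect(b_lin, b_quad, x) * x / revenue
```

The sign pattern described in the abstract (positive for temperature, negative for precipitation) would show up as the sign of `marginal_effect` at the national mean values of each variable.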

Relevance:

10.00%

Publisher:

Abstract:

This thesis works from the hypothesis that the didactic component of popularization discourse is delimited by discursive strategies that originate in the treatment of modality and are actualized at the functional, situational, semantic and formal-grammatical levels. The objective is to characterize such strategies in order to identify tendencies in the linguistic-discursive realization of the didactic component. The corpus was compiled taking into account medium (web), format (hypertext) and disciplinary domain (Sensory Analysis of Wines). The methodology is fundamentally qualitative-exemplary, based on the multilevel model proposed by Ciapuscio (2003) for the analysis of specialized texts. The results suggest that, at the functional level, the didactic component is distinguished by the predominance of the positive terms of the epistemic (informing function) and ethical (directing function) modal categories; at the situational level, by three types of discursive constructions: that of the expert enunciator, that of the lay addressee, and that of the layperson's membership in the specialized community; at the semantic level, by the standardization of textual parts and by the predominance both of euphoric ethical and cognitive axiologization and of expository sequences and causal, descriptive and illustrative explanatory procedures; and at the formal level, by paratextual and hypertextual resources that reinforce the actualization of the didactic component.

Relevance:

10.00%

Publisher:

Abstract:

Hypabyssal rocks of the Omgon Range, Western Kamchatka, which intrude Upper Albian-Lower Campanian deposits of the Eurasian continental margin, belong to three coeval (62.5-63.0 Ma) associations: (1) ilmenite gabbro-dolerites; (2) titanomagnetite gabbro-dolerites and quartz microdiorites; and (3) porphyritic biotite granites and granite-aplites. The Early Paleocene age of the ilmenite gabbro-dolerites and biotite granites was confirmed by zircon and apatite fission-track dating. The ilmenite and titanomagnetite gabbro-dolerites were produced by multilevel fractional crystallization of basaltic melts with, respectively, moderate and high Fe-Ti contents, and by contamination of these melts with rhyolitic melts of different compositions. The moderate- and high-Fe-Ti basaltic melts were derived from mantle spinel peridotite variably depleted and metasomatized by slab-derived fluid prior to melting; the melts were generated at variable depths and by different degrees of melting. The biotite granites and granite-aplites were produced by combined fractional crystallization of a crustal rhyolitic melt and its contamination with terrigenous rocks of the Omgon Group. The rhyolitic melts were likely derived from metabasaltic rocks of suprasubduction nature. The Early Paleocene hypabyssal rocks of the Omgon Range are shown to have formed in an extensional environment, which dominated the Eurasian continental margin from the Late Cretaceous through the Early Paleocene. Extension in the Western Kamchatka segment preceded the origin of the Western Koryakian-Kamchatka (Kinkil') continental-margin volcanic belt in Eocene time. This research was based on original geological, mineralogical, geochemical, and isotopic (Rb-Sr) data obtained by the authors.

Relevance:

10.00%

Publisher:

Abstract:

AIMS: Polypharmacy is associated with adverse events and multimorbidity, but data are limited on its association with specific comorbidities in primary care settings. We measured the prevalence of polypharmacy and inappropriate prescribing, and assessed the association of polypharmacy with specific comorbidities. METHODS: We did a cross-sectional analysis of 1002 patients aged 50-80 years followed in Swiss university primary care settings. We defined polypharmacy as ≥5 long-term prescribed drugs and multimorbidity as ≥2 comorbidities. We used logistic mixed-effects regression to assess the association of polypharmacy with the number of comorbidities, multimorbidity, specific sets of comorbidities, potentially inappropriate prescribing (PIP) and potential prescribing omission (PPO). We used multilevel mixed-effects Poisson regression to assess the association of the number of drugs with the same parameters. RESULTS: Patients (mean age 63.5 years, 67.5% with ≥2 comorbidities, 37.0% with ≥5 drugs) had a mean of 3.9 (range 0-17) drugs. Age, BMI, multimorbidity, hypertension, diabetes mellitus, chronic kidney disease, and cardiovascular diseases were independently associated with polypharmacy. The association was particularly strong for hypertension (OR 8.49, 95% CI 5.25-13.73), multimorbidity (OR 6.14, 95% CI 4.16-9.08), and the oldest age group (75-80 years: OR 4.73, 95% CI 2.46-9.10 vs. 50-54 years). The prevalence of PPO was 32.2%, and PIP was more frequent among participants with polypharmacy (9.3% vs. 3.2%, p < 0.006). CONCLUSIONS: Polypharmacy is common in university primary care settings, is strongly associated with hypertension, diabetes mellitus, chronic kidney disease and cardiovascular diseases, and increases potentially inappropriate prescribing. Multimorbid patients should be included in further trials aimed at developing adapted guidelines and avoiding inappropriate prescribing.
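The study's two headline definitions (polypharmacy as ≥5 long-term drugs, multimorbidity as ≥2 comorbidities) translate directly into code. A minimal sketch using an invented patient-record shape (dicts with "drugs" and "comorbidities" counts), not the study's actual data structure:

```python
def prevalences(patients, drug_cutoff=5, comorbidity_cutoff=2):
    """Prevalence of polypharmacy (>= drug_cutoff long-term drugs)
    and multimorbidity (>= comorbidity_cutoff comorbidities), the
    two threshold definitions used in the study."""
    n = len(patients)
    poly = sum(1 for p in patients if p["drugs"] >= drug_cutoff)
    multi = sum(1 for p in patients if p["comorbidities"] >= comorbidity_cutoff)
    return poly / n, multi / n
```

The associations reported in the abstract come from mixed-effects regressions on top of these flags, which this counting sketch does not attempt.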

Relevance:

10.00%

Publisher:

Abstract:

The sustainability of the forms of making the hybrid, complex city practiced by the government of visible management (VMG) is evaluated using urban governance indicators. The thesis argues that the VMG builds city in order to legitimate itself by performance and to strengthen local governance, in a context of multiple, radical mutations that tend to dilute and centralize local power and to fractalize the city, deepening socio-spatial and political segregation and the genetic ungovernability of the hybrid city, and placing at risk the decentralized federal State, the right to the city, local government, and urban and multilevel governance (hypothesis). The innovative governance evaluation strategy (GES+i), designed to assess the relationship between the forms of making the hybrid city (spatial variables) and governance (an a-spatial variable), is transversal and multidimensional; it is constructed from complexity, scenario analysis, and the formulation of constructs, models and governance indicators, weaving together three fields of knowledge (government, city and sustainability) in four phases. Phase 1 contextualizes governance in the dramatic conditions of the twenty-first century. Phase 2 develops the theoretical and practical foundations, new concepts, and an analytical approach of its own, 'territorial genetics', to analyze and comprehend the complexity of the hybrid city in developing countries, weaving territorial ontogenetics with the autopoietic character of the informal gene. In Phase 3, the forms of making city are characterized from the genetics of the territory; governance models and indicators are formulated and used to evaluate, by means of a Delphi study and questionnaires, the typological genes (forms of making city) and to validate the conclusions. In Phase 4, the results of the instruments applied are correlated with the urban praxis of the VMG over four periods of government (1996-2010).
The evaluation strategy confirmed the hypotheses and demonstrated the transversal, multilevel correlation between the ongoing mutations that contradict the constitutional governance model, the Latin American and Venezuelan governance landscape, the praxis of hybrid regimes rich in natural resources, and global development perspectives. This correlation is expressed socio-politically in deficits of governance, rule of law, and social capital and cohesion, and spatially in the dispersed, diluted (complex) hybrid city and in a local government of diluted yet centralized power. The confrontation of centripetal and centrifugal flows of power in the city deepens socio-spatial and political fragmentation and the deterioration of quality of life, increasing citizen protests and the ungovernability that hinders the overcoming of poverty and urban and multilevel governance. The evaluation of the VMG's urban praxis showed that the correlation between governance, the production of formal genes, and city-making by private initiative tends to be positive, while that between governance and the production of informal genes and informal city is negative, owing to the autopoietic, self-governing character of the informal gene and of the new sublocal governments, which makes governing in governance difficult. The VMG's praxis runs contrary to the governance model formulated, and the centralized dissolution of local government and of the dispersed hybrid city is socio-spatially and politically unsustainable. Multilevel governance strategies to recover social cohesion, and an innovative management planning strategy (EG [PG]+i), are proposed to orchestrate, from the Local Governance Council (LGC) and with the participation of sublocal spaces and governments, a shared and sustainable city project.

Relevance:

10.00%

Publisher:

Abstract:

The challenges regarding seamless integration of the distributed, heterogeneous and multilevel data arising in the context of contemporary, post-genomic clinical trials cannot be effectively addressed with current methodologies. An urgent need exists to access data in a uniform manner, to share information among different clinical and research centers, and to store data in secure repositories assuring the privacy of patients. Advancing Clinico-Genomic Trials (ACGT) was a European Commission funded Integrated Project that aimed at providing tools and methods to enhance the efficiency of clinical trials in the -omics era. The project, now completed after four years of work, involved the development of both a set of methodological approaches and a set of tools and services, as well as their testing in the context of real-world clinico-genomic scenarios. This paper describes the main experiences of using the ACGT platform and its tools within one such scenario and highlights the very promising results obtained.

Relevance:

10.00%

Publisher:

Abstract:

Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
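The Relative Entropy index cited above builds on the Kullback-Leibler divergence between discrete distributions. The sketch below implements only that generic divergence on normalised histograms; how Bird et al. and Tarquis et al. apply it to pore-solid configurations is more involved and is not reproduced here:

```python
from math import log

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as (unnormalised) histograms. Both inputs
    are normalised here; eps guards against log(0) in q. Zero-mass
    bins of p contribute nothing, by the usual 0*log(0) = 0 rule."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [max(x / sq, eps) for x in q]
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

D(p || q) is zero exactly when the two distributions coincide and grows as they diverge, which is what makes it usable as a structure-sensitive index for binarised pore imagery.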

Relevance:

10.00%

Publisher:

Abstract:

One important issue emerging strongly in agriculture is the automation of tasks, where optical sensors play an important role. They provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants (in our experiments these come from barley and corn crops, including weeds) so that some types of action can be carried out, including site-specific treatments with chemical products or mechanical manipulations. Also, the identification of textures belonging to the soil could be useful for estimating variables such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot is equipped with the imaging system, it is sometimes convenient to know not only the soil information and the plants growing in it, but also additional information supplied by global references based on specific areas. This implies that the images to be processed contain textures of three main types to be identified: green plants, soil and, if present, sky. This paper proposes a new automatic approach for segmenting these main textures and for refining the identification of sub-textures inside the main ones. Concerning green identification, we propose a new approach that exploits the performance of existing strategies by combining them. The combination takes into account the relevance of the information provided by each strategy based on its intensity variability; this is the first important contribution. The combination of thresholding approaches for segmenting the soil and the sky makes the second contribution. Finally, the adaptation of a supervised fuzzy clustering approach for identifying sub-textures automatically constitutes the third contribution. The performance of the method verifies its viability for automatic image-processing-based tasks in agriculture.
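One of the classic greenness strategies that combined approaches like the one above typically build on is the Excess Green index (ExG = 2g - r - b on chromatic coordinates), followed by a threshold. A minimal sketch of that single strategy; the threshold value is an illustrative assumption, and the paper's actual contribution is the weighted combination of several such strategies:

```python
def excess_green(r, g, b):
    """Excess Green index on chromatic coordinates:
    ExG = 2*g_n - r_n - b_n, with each channel normalised by the
    channel sum. High values indicate vegetation pixels."""
    s = r + g + b
    if s == 0:
        return 0.0
    rn, gn, bn = r / s, g / s, b / s
    return 2 * gn - rn - bn

def segment_green(pixels, threshold=0.1):
    """Binary plant mask: 1 where ExG exceeds the threshold.
    pixels is an iterable of (r, g, b) tuples."""
    return [1 if excess_green(*p) > threshold else 0 for p in pixels]
```

In the paper's scheme, the output of several greenness strategies would be merged with weights driven by intensity variability rather than used individually as here.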

Relevance:

10.00%

Publisher:

Abstract:

The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images taken from 15 videos recorded with videostroboscopic equipment by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). For the pre-processing, the algorithm combines some traditional techniques (thresholding and median filtering) with more sophisticated techniques such as anisotropic filtering; in this way, an image appropriate for the use of snakes is obtained. The value selected for the threshold is 85% of the maximum peak of the image histogram; beyond this point the information of the pixels is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels: one is the background and the other is the glottis. The initialization is based on the magnitude of the GVF (gradient vector flow) field; in this manner, an automatic process for the selection of the initial contour is assured. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and another automatic method based on the watershed transformation.
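The 85%-of-peak threshold rule is simple enough to sketch. The text is ambiguous about whether the 85% applies to the peak's count or its grey level; the version below takes the second reading (a fraction of the grey level at which the histogram peaks), which should be treated as an assumption:

```python
def peak_threshold(pixels, fraction=0.85, levels=256):
    """Threshold at a fraction of the grey level where the image
    histogram peaks; one reading of the 85%-of-peak rule in the
    text. Pixels above the returned value would be treated as
    uninformative background."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    peak_level = max(range(levels), key=lambda i: hist[i])
    return round(fraction * peak_level)
```

In the full pipeline this thresholding is only the first step, followed by median and anisotropic filtering and GVF-initialised snakes, none of which is attempted here.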

Relevance:

10.00%

Publisher:

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and 'intelligently' will include at least a module for POS tagging. The more an application needs to 'understand' the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging.
This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and integration of several linguistic tools within an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance; otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the problems and limitations of linguistic annotation tools mentioned above. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e., it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for the model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. In the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), while solving beforehand (or at least minimising) some of their limitations. Therefore, this model had to minimise these limitations by integrating several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that formalising the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax.
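The ontology-based triple structure required in (3) can be illustrated with a short sketch. The namespaces, property names and helper functions below are invented for illustration only; they are not OntoTag's actual vocabulary:

```python
# Minimal sketch: a linguistic annotation as RDF-style (subject, predicate,
# object) triples, serialised in N-Triples syntax. Namespaces are hypothetical.

LING = "http://example.org/ling#"   # illustrative linguistic vocabulary
DOC = "http://example.org/doc#"     # illustrative document namespace
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def annotate_token(token_uri, category, lemma):
    """Return the triples annotating one token with a category and a lemma."""
    return [
        (token_uri, RDF_TYPE, LING + category),
        (token_uri, LING + "hasLemma", lemma),
    ]

def to_ntriples(triples):
    """Serialise (s, p, o) tuples: URIs in angle brackets, literals quoted."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

triples = annotate_token(DOC + "token_1", "Noun", "cat")
print(to_ntriples(triples))
```

Because the annotation is just a set of triples, it can be loaded into any RDF store and linked to triples produced at other annotation levels.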
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.

Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).

Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).

Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).

Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.

Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.

Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.

Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme.

Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.

Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme.

Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.

Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.

Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation, that is, the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should likewise help support or refute the hypotheses of this work (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).

Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, which had to be conveniently combined by means of these processes.

Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).

Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.

Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3.
MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and of OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
o OntoTag's annotation scheme,
o OntoTag's annotation architecture,
o OntoTagger's (XML, RDF, OWL) annotation schemas,
o OntoTagger's configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger's ontology-based configuration:
o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger);
o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
o Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even when these other tools are built following a different technological approach (stochastic vs. rule-based, for example) or a different theoretical approach (dependency-based vs. HPSG-based, for instance).
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently.
• REJECTED on the basis of OntoTagger's experiments and of the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 had already been rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multilevel annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
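The error-reduction idea behind H.5 above can be illustrated with the simplest possible combination strategy: a per-token majority vote over the tags proposed by several tools, with one tool designated to break ties. This is only a hypothetical sketch, not OntoTagger's actual (ontology-based) combination processes:

```python
from collections import Counter

def combine_pos_tags(tag_sequences, priority=0):
    """Majority-vote combination of per-token POS tags from several tools.

    tag_sequences: one list of tags per tool, all aligned token-by-token.
    priority: index of the tool whose tag breaks ties.
    """
    combined = []
    for token_tags in zip(*tag_sequences):
        counts = Counter(token_tags)
        best, best_count = counts.most_common(1)[0]
        # On a tie, fall back to the designated higher-priority tool.
        if list(counts.values()).count(best_count) > 1:
            best = token_tags[priority]
        combined.append(best)
    return combined

# Three hypothetical taggers disagreeing on the second token:
tools = [
    ["DET", "NOUN", "VERB"],
    ["DET", "VERB", "VERB"],
    ["DET", "NOUN", "VERB"],
]
print(combine_pos_tags(tools))  # -> ['DET', 'NOUN', 'VERB']
```

A single tool's systematic error is outvoted whenever the other tools, built on different technological or theoretical grounds, do not share it, which is precisely the intuition H.5 formalises.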
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies).
• On the one hand, OntoTag's network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag's Integration Ontology), which (a) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (b) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e., POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2.
As an immediate result, this implies that (a) this type of combination architecture configurations can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both of them leave much to be desired: the former with respect to their annotation rate; the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to automatically annotate Semantic Web pages using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
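For reference, figures such as these can be computed for any annotated file with the standard set-based definitions of precision and recall; the entity spans below are invented purely for illustration:

```python
def precision_recall(gold, predicted):
    """Standard precision/recall over sets of annotated entities."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical named-entity annotations for one cinema-review page:
gold = {("Marlon Brando", "ACTOR"), ("The Godfather", "FILM"), ("Coppola", "DIRECTOR")}
pred = {("Marlon Brando", "ACTOR"), ("The Godfather", "FILM")}
p, r = precision_recall(gold, pred)
print(f"P = {p:.2%}, R = {r:.2%}")  # -> P = 100.00%, R = 66.67%
```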
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9 and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent; further experiments should assess how our approach performs in a different domain or language, such as French, English or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers can be integrated together; and (ii) they can be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multilevel) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, of Subcommittee 4, which deals with the standardisation of linguistic annotations and resources).


Resumo:

This final-year project (Proyecto Fin de Carrera) deals with the recognition and identification of vehicle licence plate characters. Such recognition systems are known worldwide as ANPR ("Automatic Number Plate Recognition") or LPR ("License Plate Recognition") systems. The huge volume of vehicles and logistics moving every second all over the world makes their registration necessary for processing and control. It is therefore necessary to implement a system that can correctly identify these resources for further processing, thus building a useful, flexible and dynamic tool. This work has been structured in several parts. The first presents the objectives and motivations pursued with this project. The second addresses and develops all the theoretical, technical and mathematical processes that form a common ANPR system, with the aim of implementing a practical application that can demonstrate their usefulness in any situation. The third develops the practical part on which the theoretical basis of the work rests: it describes and develops the various algorithms created to study and verify everything proposed so far, as well as to observe their behaviour. Several processes characteristic of character and pattern recognition are implemented, such as area and pattern detection, image rotation and transformation, edge detection, character and pattern segmentation, thresholding and normalisation, feature and pattern extraction, neural networks and, finally, optical character recognition, commonly known as OCR. The last part presents the results obtained with the character recognition system implemented for this work, together with the conclusions drawn from it. Finally, future lines of improvement, development and research are proposed in order to build a more efficient and comprehensive system.
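The thresholding step of such a pipeline is commonly implemented with Otsu's method, which picks the grey level that maximises the between-class variance of the resulting binary image. The following is a generic, self-contained sketch (not the project's actual code), applied to an invented bimodal "image":

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level that maximises between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    weight_b = sum_b = 0
    for t in range(levels):
        weight_b += hist[t]          # pixels assigned to the background class
        if weight_b == 0:
            continue
        weight_f = total - weight_b  # pixels assigned to the foreground class
        if weight_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / weight_b
        mean_f = (sum_all - sum_b) / weight_f
        var_between = weight_b * weight_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "plate": dark background around 30-35, bright characters at 200-210.
pixels = [30] * 50 + [35] * 40 + [200] * 30 + [210] * 20
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

The computed threshold falls between the two grey-level clusters, so the binarisation separates characters from background before segmentation and OCR.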


Resumo:

Improving energy efficiency and reducing the failure rate of lubricated contacts are issues of great interest in many sectors of industry, and they currently pose new operational difficulties and challenges for the near future. Technological advances have increased the technical demands placed on oils by extending their operating variables to a wider range of applications, in terms of both operating conditions and the great variety of new gear materials with which they must be used. For this reason, the development of new procedures that make it possible to understand the behaviour of this type of lubricated contact is currently being encouraged, in order to achieve technical improvements in their design and the correct selection of the oil. This doctoral thesis presents a numerical calculation methodology for simulating the behaviour of point elastohydrodynamic (EHD) contacts, such as those found in a rolling bearing. Solving this problem involves several mathematical complexities and requires the development of an elaborate calculation procedure based on multilevel techniques. To make the procedure a valid tool for a wide range of operating conditions and lubricant types, the calculation takes into account the possible appearance of non-Newtonian behaviour of the lubricant, as well as heat generation and dissipation phenomena caused by the relative motion of the fluid and the contacting surfaces. To validate the procedure, the numerical results obtained with our method were compared with numerical and experimental results published by other authors and with our own experimental values measured on MTM point-contact test equipment. The development of this program has provided the Machine Engineering Division with a tool that has allowed, and above all will allow, the importance of each of the rheological parameters to be evaluated in the different problems it will have to address; until now, this evaluation was carried out with approximate methods that describe the phenomenology with much less precision. When using our numerical procedure to simulate real situations, we encountered the obstacle that it is very difficult to find, in the literature and in databases, the parameters that characterise the rheological behaviour of the lubricant under the conditions of pressure, temperature and shear rate at which this type of contact usually operates, and the few measurements that exist for these operating conditions are unreliable. Therefore, as a complement to the main objective of this thesis, we have developed a methodology to characterise lubricants under these extreme conditions. This methodology is based on the good description that our program provides of the friction coefficient, which has allowed us to obtain the rheological parameters of a lubricant from experimental measurements of the friction coefficient generated in MTM equipment lubricated with the lubricant to be characterised.
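The non-Newtonian (shear-thinning) behaviour mentioned above is often modelled with a viscosity law such as the Carreau equation. The sketch below evaluates that law with purely illustrative parameter values, not those of any lubricant characterised in the thesis:

```python
def carreau_viscosity(shear_rate, eta_0, eta_inf, lam, n):
    """Carreau model: apparent viscosity as a function of shear rate.

    eta_0   -- zero-shear viscosity [Pa*s]
    eta_inf -- infinite-shear viscosity [Pa*s]
    lam     -- relaxation time [s]
    n       -- power-law index (n < 1 means shear-thinning)
    """
    return eta_inf + (eta_0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Illustrative parameters only; shear rates in EHD contacts can reach ~1e7 1/s.
for rate in (1e0, 1e4, 1e7):
    eta = carreau_viscosity(rate, eta_0=0.1, eta_inf=1e-3, lam=1e-5, n=0.7)
    print(f"shear rate {rate:9.0e} 1/s -> viscosity {eta:.4e} Pa*s")
```

At low shear rates the model recovers the Newtonian plateau eta_0, while at the high shear rates typical of EHD contacts the apparent viscosity drops towards eta_inf, which is exactly the regime where fitting these parameters from friction-coefficient measurements becomes valuable.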