964 results for Point Data


Relevance:

30.00%

Publisher:

Abstract:

Information on the pivot point of a turning ship is collected, drawing on practical notes and manuals on ship manoeuvring as well as experimental data and simulation results, which together reveal consistent behaviour when water depth or certain ship particulars are varied. Results from the studies already carried out on the Riverine Support Patrol Vessel (RSPV) of the Colombian Navy are included here, in order to estimate the pivot point's position and to contrast those results with theory and available empirical observations. Linear manoeuvrability theory is tested, and its results show poor agreement with the kinematic equations. As to the effect of depth variation, full-scale experiments confirm that the pivot point's position, when moving into shallow water, always varies in the same way, consistent with the available information on this phenomenon.
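For readers unfamiliar with the kinematics involved, a minimal sketch of how the pivot point follows from them: it is the point on the ship's longitudinal axis where the transverse velocity vanishes, so with sway velocity v and yaw rate r it sits at x_p = -v/r from the reference point. The Python snippet below is a toy illustration of this relation, not the paper's model.

```python
def pivot_point(v_sway, yaw_rate):
    """Longitudinal position (m, positive forward of the reference point,
    e.g. midships) where the transverse velocity is zero:
    v + x * r = 0  =>  x_p = -v / r."""
    if yaw_rate == 0:
        raise ValueError("pivot point is undefined on a straight course (r = 0)")
    return -v_sway / yaw_rate

# Example: 1.2 m/s sway (to port) with a 0.02 rad/s yaw rate places the
# pivot point 60 m forward of the reference point.
print(pivot_point(-1.2, 0.02))
```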

Relevance:

30.00%

Publisher:

Abstract:

As a result of the Variscan collision, several allochthonous complexes were emplaced on the Iberian margin in Devonian times, among them the Cabo Ortegal Complex comprising the Moeche ophiolitic sequence. Copper has been won from several mines (Piquitos I & II, Barqueira, Maruxa) exploiting disseminated ores and thin massive sulphide layers in the Moeche Unit, a strongly deformed meta-volcanic sequence comprising mainly quartz-chlorite schists and mylonites, which defines the top of the ophiolite. The ores were metamorphosed and strongly deformed under brittle conditions (for pyrite), but their textures often appear post-deformational, owing to very common solution-transfer processes; they are composed mostly of pyrite and chalcopyrite, with minor sphalerite, pyrrhotite, etc., and with traces of native gold and PGE. The geology, mineralogy and geochemistry of the orebodies relate closely to VMS deposits of the Cu-Zn (Cyprus) type. Fluid inclusion studies allowed an estimation of metamorphic conditions at pressures of 2–2.5 kbar and temperatures of 325–350 °C. New determinations using the chlorite geothermometer yield temperatures around 320 °C, corresponding to pressures near 2 kbar according to the isochores deduced from the fluid inclusion study, although in the Barqueira mine higher temperatures, up to 350 °C, are found, corresponding to pressures up to 2.5 kbar. Pb isotopic compositions of pyrite point to a double source of Pb, i.e. a main mantle source and a subordinate crustal source. The 87Sr/86Sr values in pyrite support this interpretation, but some results suggest later mobilization in an open system, corresponding to solution-transfer. Age determinations of pyrite deduced from the Pb isotope uranogenic graph, ≈ 480 Ma, do not match the metamorphic ages published for the Moeche Unit, and might point to the age of Pb extraction from the mantle.
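As an aside on the geothermometry step, here is a minimal sketch of how a chlorite geothermometer of this kind is applied. The linear calibration used below (Cathelineau, 1988) is one common published calibration; the abstract does not state which one the authors used, so treat the coefficients as an assumption.

```python
def chlorite_temperature_c(al_iv):
    """Chlorite geothermometer, Cathelineau (1988) calibration (an assumed
    choice, not necessarily the authors'): T (degC) = 321.98 * Al(IV) - 61.92,
    with Al(IV) the tetrahedral aluminium in atoms per formula unit."""
    return 321.98 * al_iv - 61.92

# An Al(IV) of ~1.19 apfu reproduces the ~320 degC reported for most samples.
print(round(chlorite_temperature_c(1.19)))  # ~321
```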

Relevance:

30.00%

Publisher:

Abstract:

In parallel with the effort to create Linked Open Data for the World Wide Web, a number of projects aim to develop the same technologies for use in closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems, a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve key problems of contemporary Idea Management Systems.
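A minimal sketch of the kind of interlinking described, using Python's rdflib; the ontology namespace and the property names are hypothetical placeholders, not the paper's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace standing in for an Idea Management System ontology.
GI = Namespace("http://example.org/idea-ontology#")
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
idea = URIRef("http://example.org/ideas/42")
g.add((idea, RDF.type, GI.Idea))
g.add((idea, RDFS.label, Literal("Solar-powered charging bench")))
# The interlinking step: relate the idea to a public Linked Data concept,
# so it can be joined with enterprise and public datasets.
g.add((idea, GI.relatedConcept, DBR["Solar_energy"]))

print(g.serialize(format="turtle"))
```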

Relevance:

30.00%

Publisher:

Abstract:

Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. Comparatively little attention, however, has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates, with dynamic predicates (where the program is modified during execution), and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict in one way or another the classes of programs which can be analyzed if the information from analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.
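To make the builtin problem concrete, here is a toy sketch of an abstract transfer function over a groundness-style domain. The domain, its encoding and the treatment of assert/1 are illustrative assumptions, not the framework actually proposed in the paper.

```python
# Toy groundness domain: each variable maps to "g" (ground), or "any" (unknown).
def transfer_builtin(name, args, env):
    """Abstract environment after a builtin call: sound, but may lose precision."""
    env = dict(env)
    if name == "is/2":            # X is Expr: on success X is ground, and the
        for v in args:            # expression's variables must already be ground
            env[v] = "g"          # for the call not to raise an error.
    elif name == "assert/1":      # Dynamic predicates: the clause database changes,
        pass                      # so the asserted predicate must be re-analyzed
                                  # conservatively (handled globally, not here).
    else:                         # Unknown builtin: assume nothing about its args.
        for v in args:
            env[v] = "any"
    return env

print(transfer_builtin("is/2", ["X", "Y"], {"X": "any", "Y": "g"}))
```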

Relevance:

30.00%

Publisher:

Abstract:

Aircraft Operating Companies (AOCs) are always willing to keep the cost of a flight as low as possible. These costs can be modelled as a function of fuel consumption, time of flight and fixed costs (overflight charges, maintenance, etc.), which in turn depend strongly on the atmospheric conditions, the presence of winds and the aircraft performance. For this reason, much research effort is being put into the development of numerical and graphical techniques for defining the optimal trajectory. This paper presents a different approach to accommodating AOC preferences, adding value to their activities, through the development of a tool called the aircraft trajectory simulator. This tool is able to simulate the actual flight of an aircraft under the constraints imposed. The simulator is based on a point-mass model of the aircraft. The aim of this paper is to evaluate the errors of a 3DoF aircraft model built with BADA data against real data from a Flight Data Recorder (FDR). Therefore, to validate the proposed simulation tool, a comparative analysis of the state-variable vector is made between an actual flight and the same flight flown in the simulator. Finally, an example of a cruise phase is presented, in which a conventional level flight is compared with a continuous climb flight. The comparison results show the potential benefits of following user-preferred routes for commercial flights.
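A minimal sketch of the kind of 3DoF point-mass longitudinal model such a simulator is based on; the thrust, drag and fuel-flow numbers below are illustrative, not BADA performance data.

```python
import math

G = 9.81  # m/s^2

def step(state, thrust, drag, fuel_flow, gamma, dt):
    """Advance (speed V [m/s], altitude h [m], distance x [m], mass m [kg])
    by dt, using standard point-mass longitudinal equations of motion."""
    V, h, x, m = state
    dV = ((thrust - drag) / m - G * math.sin(gamma)) * dt
    dh = V * math.sin(gamma) * dt
    dx = V * math.cos(gamma) * dt
    dm = -fuel_flow * dt
    return (V + dV, h + dh, x + dx, m + dm)

state = (230.0, 10000.0, 0.0, 60000.0)   # cruise-like initial condition
for _ in range(600):                      # 10 minutes at dt = 1 s, level flight
    state = step(state, thrust=40e3, drag=40e3, fuel_flow=0.9, gamma=0.0, dt=1.0)
print(state)
```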

Relevance:

30.00%

Publisher:

Abstract:

The development of new-generation intelligent vehicle technologies will lead to a better level of road safety and to CO2 emission reductions. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) infrastructure sensors and 2) floating vehicles. The former consists of a set of fixed-point detectors installed in the roads, and the latter consists of the use of mobile probe vehicles as mobile sensors. However, both systems still have deficiencies. The infrastructure sensors retrieve information from static points of the road, which are spaced, in some cases, kilometers apart, so they cannot provide a complete picture of the actual traffic situation. This deficiency is corrected by floating cars, which retrieve dynamic information on the traffic situation. Unfortunately, the number of floating-data vehicles currently available is too small to give a complete picture of the road traffic. In this paper, we present a floating car data (FCD) augmentation system that combines information from floating-data vehicles and infrastructure sensors and that, by using neural networks, is capable of increasing the amount of FCD with virtual information. This system has been implemented and tested on actual roads, and the results show little difference between the data supplied by the floating vehicles and by the virtual ones.
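A toy sketch of the augmentation idea with a small neural network (scikit-learn's MLPRegressor): learn a mapping from infrastructure-sensor readings to the quantity a floating vehicle would report on that stretch. The inputs, outputs, network size and data below are assumptions for illustration, not the system's actual design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic loop-detector readings: flow (veh/h), occupancy, spot speed (km/h).
X = rng.uniform([200, 0.05, 40], [2000, 0.6, 120], size=(500, 3))
# Toy "ground truth" travel time (s/km) a probe vehicle would have reported.
y = 3600.0 / X[:, 2] + 20.0 * X[:, 1] + rng.normal(0, 1, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# "Virtual" FCD for a road segment where no probe vehicle is present:
print(model.predict([[900.0, 0.25, 80.0]]))
```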

Relevance:

30.00%

Publisher:

Abstract:

Due to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain information valuable to organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross-check multidimensional model data. Using these, we propose diagrams and descriptions of use cases, which can be considered patterns representing the DSS functionality with regard to the processing of the DW data on which the DSS is based. We highlight the reusability and automation benefits that can be achieved in this way, and we believe this study can serve as a guide in the development of DSS.
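A minimal sketch of the kind of multidimensional (DW-style) data and roll-up operation the use-case patterns describe; the dimensions and fact values are invented for illustration.

```python
from collections import defaultdict

# Star-schema-like fact records: dimension attributes plus a measure.
facts = [
    {"year": 2023, "region": "north", "product": "A", "sales": 120.0},
    {"year": 2023, "region": "south", "product": "A", "sales": 80.0},
    {"year": 2024, "region": "north", "product": "B", "sales": 150.0},
]

def roll_up(facts, dims, measure):
    """Aggregate a measure over the given dimensions (a GROUP BY ... SUM),
    one of the 'possible ways to cross-check' multidimensional data."""
    out = defaultdict(float)
    for f in facts:
        out[tuple(f[d] for d in dims)] += f[measure]
    return dict(out)

print(roll_up(facts, ("year",), "sales"))            # {(2023,): 200.0, (2024,): 150.0}
print(roll_up(facts, ("region", "product"), "sales"))
```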

Relevance:

30.00%

Publisher:

Abstract:

The writer would like to point out the existence of a very remarkable Spanish cable-stayed bridge built in 1925, which is thus older than the first one recorded by the authors (and probably the pioneer of the concrete-deck type). The Tempul Aqueduct was designed by the famous Professor Eduardo Torroja. The deck is a concrete box girder sustained by two planes of double cables, each of 37 wires of 3 mm diameter, working at 27 kg/mm².
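A rough check of the quoted figures; the note gives only the cable specification, so this arithmetic is ours, not the writer's.

```python
import math

# 37 wires of 3 mm diameter at a working stress of 27 kg/mm^2.
wire_area = math.pi * (3.0 / 2) ** 2      # mm^2 per wire, ~7.07
cable_area = 37 * wire_area               # ~261.6 mm^2 per cable
working_load_kg = 27.0 * cable_area       # ~7,060 kg (~69 kN) per cable
print(round(cable_area, 1), round(working_load_kg))
```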

Relevance:

30.00%

Publisher:

Abstract:

There are a number of factors that contribute to the success of dental implant operations. Among them is the choice of the location in which the prosthetic tooth is to be implanted. This project offers a new approach to analysing jaw tissue for the purpose of selecting suitable locations for tooth implant operations. The application developed takes as input a stack of computed tomography (CT) slices of the jaw and trims the data outside the jaw area, which is the region of interest. It then reconstructs a three-dimensional model of the jaw, highlighting points of interest on the reconstructed model. In addition, data mining techniques have been used to construct a prediction model from a dataset of previous dental implant operations with observed stability values. The goal is to find patterns within the dataset that help predict the success likelihood of an implant.
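A toy sketch of the prediction step: learn success likelihood from past operations. The features (bone density, cortical thickness, insertion torque), the classifier choice and the data values are hypothetical; the abstract does not describe the actual dataset or model.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical records of past implant operations: [bone density (g/cm^3),
# cortical thickness (mm), insertion torque (N*cm)] with observed stability.
X = [[1.2, 2.0, 35], [0.8, 1.1, 20], [1.5, 2.4, 40], [0.7, 0.9, 15]]
y = [1, 0, 1, 0]   # 1 = stable implant, 0 = unstable

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Estimated [P(unstable), P(stable)] for a candidate implant location:
print(clf.predict_proba([[1.0, 1.8, 30]]))
```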

Relevance:

30.00%

Publisher:

Abstract:

For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all possible uncertainties that affect their design, performance and possible accidents. Nuclear data are a source of uncertainty involved in neutronics, fuel depletion and activation calculations. These calculations predict critical response functions during operation and in the event of an accident, such as the decay heat and the neutron multiplication factor. Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse this impact, and it is also necessary to understand the current status of nuclear data and their uncertainties in order to be able to handle this type of data. Great efforts are under way to enhance the European capability to analyse, process and produce covariance data, especially for isotopes of importance for advanced reactors, and at the same time new methodologies and codes are being developed and implemented for using these data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for the development of this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on the three kinds of data: decay data, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed.
The Nuclear Engineering Department (DIN) of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code, and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the cross-section energy-group structure, together with their differences and applicability criteria: one-group, one-group with correlated sampling, and multi-group. Sequences have been developed for using different nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE) and EAF (for activation libraries). The review of the state of the art of fission yield data reveals a lack of uncertainty information, specifically of complete covariance matrices. Furthermore, the international community has expressed renewed interest in this issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), whose Subgroup 37 (SG37) is dedicated to assessing the needs for nuclear data improvement; this motivates a review of the methodologies for generating covariance data for fission yields. A Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to answer this need. Once the Hybrid Method has been implemented, developed and extended, along with the fission yield covariance generation capability, different nuclear applications are studied. The fission pulse decay heat problem is tackled first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay data and fission yield uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are then studied: those of the European Facility for Industrial Transmutation (EFIT) and of the European Sodium Fast Reactor (ESFR), for which the impact of nuclear data uncertainties on response functions such as the isotopic composition, decay heat and radiotoxicity is addressed. Different nuclear data libraries are used and compared in these studies. The applications also serve as frameworks for comparing the different approaches of the Hybrid Method, and for comparing it with other methodologies for nuclear data uncertainty propagation: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
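A minimal sketch of the Monte Carlo sampling idea underlying the Hybrid Method: draw fission yields from a multivariate normal built on a complete covariance matrix, propagate each sample through a response calculation, and read the spread of the response. All numbers below are toy values, not evaluated nuclear data, and the response is a deliberately simplified decay-heat sum.

```python
import numpy as np

rng = np.random.default_rng(0)

y_mean = np.array([0.060, 0.030, 0.015])          # toy cumulative fission yields
rel_cov = np.array([[ 0.04, -0.01,  0.00],        # toy relative covariance matrix
                    [-0.01,  0.09,  0.01],        # (the "complete" matrix the GLS
                    [ 0.00,  0.01,  0.16]])       #  update is meant to supply)
cov = rel_cov * np.outer(y_mean, y_mean)

lam = np.array([1e-2, 5e-3, 1e-3])                # decay constants (1/s)
E = np.array([0.5, 1.2, 0.8])                     # energy per decay (MeV)
t = 100.0                                         # time after the fission pulse (s)

# Sample yields, propagate each sample through the toy decay-heat response.
samples = rng.multivariate_normal(y_mean, cov, size=10_000)
heat = (samples * lam * E * np.exp(-lam * t)).sum(axis=1)
print(f"decay heat: {heat.mean():.3e} +/- {heat.std():.1e} (arbitrary units)")
```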

Relevance:

30.00%

Publisher:

Abstract:

Light Detection and Ranging (LIDAR) provides spatial data with high horizontal and vertical resolution in the form of point cloud images, and is increasingly being used in a number of applications and disciplines, which have concentrated on the exploitation and manipulation of the data, mainly through its three-dimensional nature. Bathymetric LIDAR systems and data are mainly aimed at mapping depths in shallow and clear waters with a high degree of accuracy. In addition, because of the backscattering produced by the different materials distributed over the bottom surface, the returned intensity signal contains important information about the reflection properties of these materials. Processing these values appropriately using a Simplified Radiative Transfer Model allows different sea bottom types to be identified. This paper presents an original method for the classification of the sea bottom by processing information extracted from the images generated from LIDAR data. The results are validated using a vector database containing benthic information derived from marine surveys.
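A toy sketch of the depth-correction-and-classification idea: in a simplified radiative transfer model the bottom return is attenuated over the two-way water path, roughly as exp(-2*k*d) for depth d and attenuation k, so correcting the received intensity for depth yields an estimate of bottom reflectance that can be thresholded into classes. The attenuation coefficient, thresholds and class names below are assumptions, not the paper's values.

```python
import math

K_ATTEN = 0.15          # 1/m, assumed water attenuation coefficient

def bottom_reflectance(intensity, depth_m):
    """Undo the two-way attenuation to estimate bottom reflectance."""
    return intensity * math.exp(2 * K_ATTEN * depth_m)

def classify(reflectance):
    if reflectance > 0.6:
        return "sand"
    if reflectance > 0.3:
        return "rock"
    return "seagrass"

for i, d in [(0.25, 3.0), (0.03, 8.0), (0.02, 5.0)]:
    r = bottom_reflectance(i, d)
    print(f"I={i:.2f} d={d:.1f} m -> R={r:.2f} -> {classify(r)}")
```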

Relevance:

30.00%

Publisher:

Abstract:

Diabetes mellitus is a disorder of carbohydrate metabolism characterized by absent or insufficient secretion of insulin (a hormone produced by the pancreas), as a result of a malfunction of the endocrine part of the pancreas or of a growing resistance of the organism to this hormone. After the digestive process, the food we eat is transformed into smaller chemical compounds by the exocrine tissues. The absence or limited effectiveness of this polypeptide hormone prevents the metabolization of the ingested carbohydrates, with two consequences: the glucose concentration in the blood rises, since the cells cannot metabolize it; and the liver consumes fatty acids, releasing ketone bodies to supply energy to the cells. This situation exposes the chronic patient to very high blood glucose concentrations, called hyperglycemia, which in the medium or long term can cause multiple medical problems: ophthalmological, renal, cardiovascular, cerebrovascular, neurological, and others. Diabetes represents a great public health problem and is the most common disease in developed countries, favoured by factors such as obesity and sedentary lifestyles. In this project we work with clinical experimentation data from patients with type 1 diabetes, an autoimmune disease in which the beta cells of the pancreas (the producers of insulin) are destroyed, making the administration of exogenous insulin necessary. The patient with type 1 diabetes must therefore follow a treatment with insulin administered by the subcutaneous route, adapted to his or her metabolic needs and life habits. To address the regulation of the patient's metabolic control through insulin therapy, we make use of the "Artificial Endocrine Pancreas" (PEA) project, which consists of an insulin infusion pump, a continuous glucose sensor and a closed-loop control algorithm. The main objective of the PEA is to provide the patient with precision, efficacy and safety in normalizing glycemic control and reducing the risk of hypoglycemia. The PEA works via the subcutaneous route, so the delay introduced by the insulin action, the delay of the glucose measurement, and the errors introduced by continuous glucose sensors when they drift out of calibration all hinder the use of a control algorithm. At this point the patient's glucose must be modelled with predictive systems. A model is any element that allows us to predict the behaviour of a system by introducing input variables. What we obtain is a prediction of the future states of the patient's glucose, using as inputs insulin, meal intake and glucose values that are already known because they occurred earlier in time. When the glucose predictor is used with parameters obtained in real time, it can indicate the future glucose level for the decision-making of the closed-loop (CL) controller. The predictors currently employed in the PEA are not working correctly because of the amount of information and the number of variables they must handle.
Data mining, also referred to as Knowledge Discovery in Databases (KDD), has been defined as the non-trivial process of extracting implicit, previously unknown and potentially useful information, through the following phases of the knowledge-extraction process: data selection, pre-processing, transformation, data mining, interpretation of the results, evaluation, and acquisition of the knowledge. With this process we seek to generate a single insulin-glucose model that fits each patient individually and is capable, at the same time, of predicting future glucose states with real-time calculations from a few input parameters. This project seeks to extract the information contained in a database of type 1 diabetic patients obtained from clinical experimentation, using data mining techniques. To achieve this objective we have implemented a graphical interface that guides the user through every step of the KDD process (with graphical and statistical information). For the data mining part we have used the WEKA tool, controlling all of its functions through Java from the program we created. Finally, the project gains additional potential from the possibility of porting the code to Android devices, through which new and very useful applications could be implemented for this field. As a conclusion of the project, and after an exhaustive analysis of the results obtained, we show how the insulin-glucose model is obtained for each patient.
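A minimal sketch of an individualized insulin-glucose predictor as a lagged linear model. The thesis builds its models with WEKA from clinical data; this toy uses synthetic data and plain least squares purely to illustrate the prediction setup (history window, horizon, inputs).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
glucose = 120 + np.cumsum(rng.normal(0, 2, n))     # mg/dL, synthetic CGM trace
insulin = rng.uniform(0, 1.5, n)                   # U per 5-min step (synthetic)
carbs = rng.choice([0, 0, 0, 15, 30], n)           # g per step (synthetic)

LAGS, HORIZON = 6, 6                               # 30 min of history, +30 min ahead
rows, targets = [], []
for t in range(LAGS, n - HORIZON):
    rows.append(np.concatenate([glucose[t-LAGS:t], insulin[t-LAGS:t], carbs[t-LAGS:t]]))
    targets.append(glucose[t + HORIZON])
X, y = np.array(rows), np.array(targets)

# Fit the lagged linear (ARX-style) model by least squares, with an intercept.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(f"RMSE on training data: {np.sqrt(np.mean((pred - y) ** 2)):.1f} mg/dL")
```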

Relevance:

30.00%

Publisher:

Abstract:

Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods for building three-dimensional models of orchards. The cylinder fitting method for the woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the generation of the cylinder direction, which must be recalculated on the inclusion of each point in the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence, and the reconstruction parameter values must be carefully chosen. A balanced processing of clusters, which follows the hierarchy of branches (predecessors and successors), has also been defined and has proven very efficient in terms of processing time. The algorithm was applied to simulated MTLS scans of virtual orchard models and to MTLS data from real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of the parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
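A toy sketch of the axis-recalculation step the abstract identifies as the most time-consuming: estimate the cylinder direction as the principal direction of the current point set, and the radius as the mean radial distance from that axis. This illustrates the idea only; it is not the paper's full fitting algorithm.

```python
import numpy as np

def cylinder_axis_and_radius(points):
    """Axis = first principal direction of the points; radius = mean distance
    of the points from that axis (both recomputed as points are added)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                  # principal direction
    radial = centered - np.outer(centered @ axis, axis)
    return axis, np.linalg.norm(radial, axis=1).mean()

# Synthetic points on a vertical branch segment of radius 0.03 m:
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
z = rng.uniform(0, 0.5, 200)
pts = np.c_[0.03 * np.cos(theta), 0.03 * np.sin(theta), z]
axis, r = cylinder_axis_and_radius(pts + rng.normal(0, 0.002, pts.shape))
print(axis.round(3), round(r, 3))                 # axis ~ (0, 0, +/-1), r ~ 0.03
```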