509 results for Paracalanus quasimodo
Abstract:
The SESAME dataset contains mesozooplankton data collected during April 2008 in the north-western part of the Black Sea (between 44°46'N and 42°29'N latitude and 28°64'E and 30°59'E longitude). Mesozooplankton sampling was undertaken at 9 stations, where samples were collected with a Hensen net in the 0-10, 10-25, 25-50, 50-100, 100-150, and 150-200 m layers. The dataset includes 29 samples analysed for mesozooplankton species composition and abundance. The entire sample or an aliquot (1/2 to 1/4) was analysed under a binocular microscope. Zooplankton abundance was calculated from M - number of counted specimens (ind.), Vf - volume of filtered water (m³), and K - counted part of the sample, in accordance with the Report of the third ICES/HELCOM Workshop on Quality Assurance of Biological Measurements, Warnemünde, Germany, 1996. (http://www2008.io-warnemuende.de/research/helcom_zp/documents/qa_zp_part.pdf)
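The abundance formula itself is not reproduced in the record, only its variables. A minimal Python sketch of the normalization implied by those definitions (abundance = M / (K × Vf); this form is an assumption based on the variable descriptions, not quoted from the dataset documentation) is:

# Assumed ICES/HELCOM-style normalization: abundance = M / (K * Vf)
def abundance(M, Vf, K):
    """Zooplankton abundance (ind/m³).
    M  - number of counted specimens (ind.)
    Vf - volume of filtered water (m³)
    K  - counted part of the sample (e.g. 0.25 for a 1/4 aliquot)
    """
    return M / (K * Vf)

# Example: 120 specimens counted in a 1/4 aliquot of a haul that filtered 2.5 m³
print(abundance(M=120, Vf=2.5, K=0.25))  # 192.0 ind/m³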
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying-platform technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying-platform technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time series analysis is required.
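As a purely illustrative sketch of the general idea (not the authors' framework; every name here - conceptual_model, sales_fact, analysis_input, the ARIMA parameter set - is hypothetical), a platform-independent description of a time-series mining task can be mechanically transformed into both the data under analysis and platform-specific analysis parameters:

# Toy illustration only: a conceptual data-mining model is captured as a
# platform-independent dictionary, and two small "model transformations"
# turn it into platform-specific artefacts.

conceptual_model = {
    "analysis": "time_series",          # kind of mining task
    "measure": "monthly_sales",         # fact/measure under analysis
    "dimensions": ["store", "month"],   # warehouse dimensions feeding the model
    "horizon": 12,                      # forecast horizon, in periods
}

def to_warehouse_sql(model):
    """Transformation 1: generate the data under analysis as a SQL view."""
    dims = ", ".join(model["dimensions"])
    return (f"CREATE VIEW analysis_input AS "
            f"SELECT {dims}, SUM({model['measure']}) AS value "
            f"FROM sales_fact GROUP BY {dims};")

def to_mining_config(model):
    """Transformation 2: generate platform-specific analysis parameters."""
    return {"algorithm": "arima", "target": model["measure"],
            "forecast_periods": model["horizon"]}

print(to_warehouse_sql(conceptual_model))
print(to_mining_config(conceptual_model))

The point of the transformation step is that the analyst edits only the platform-independent description; the SQL view and the engine configuration are regenerated automatically from it.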
Abstract:
Geographic knowledge discovery (GKD) is the process of extracting information and knowledge from massive georeferenced databases. Usually the process is accomplished by two different systems, Geographic Information Systems (GIS) and data mining engines. However, the development of such systems is a complex task because it does not follow a systematic, integrated, and standard methodology. To overcome these pitfalls, in this paper we propose a modeling framework that addresses the development of the different parts of a multilayer GKD process. The main advantages of our framework are that: (i) it reduces the design effort, (ii) it improves the quality of the systems obtained, (iii) it is independent of platforms, (iv) it facilitates the use of data mining techniques on geo-referenced data, and, finally, (v) it improves communication between the different users.