6 results for Warehouses.

at Universidad de Alicante


Relevance:

20.00%

Publisher:

Abstract:

Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to treat it as a truly software-engineering process. Bearing this situation in mind, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the technical details of the underlying platform. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario in which a time-series analysis is required.
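The following is a minimal sketch of the model-transformation idea described in this abstract, not the authors' actual metamodels or transformations: a hypothetical conceptual time-series mining model is turned into (i) a DDL fragment for the data under analysis and (ii) a platform-specific analysis stub. All names (TimeSeriesMiningModel, to_warehouse_ddl, to_analysis_script) are illustrative assumptions.

```python
# Minimal sketch (hypothetical metamodel): a conceptual data-mining model
# is transformed into platform-specific artifacts, here a star-schema DDL
# for the warehouse side and an R-like analysis stub for the mining side.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimeSeriesMiningModel:           # hypothetical conceptual model
    fact: str                          # measure under analysis
    time_dimension: str                # granularity of the series
    dimensions: List[str] = field(default_factory=list)

def to_warehouse_ddl(m: TimeSeriesMiningModel) -> str:
    """Transformation 1: generate the data under analysis (star-schema DDL)."""
    dims = ",\n  ".join(f"{d}_id INTEGER REFERENCES dim_{d}(id)" for d in m.dimensions)
    return (f"CREATE TABLE fact_{m.fact} (\n"
            f"  {m.time_dimension}_id INTEGER REFERENCES dim_{m.time_dimension}(id),\n"
            f"  {dims},\n"
            f"  {m.fact} NUMERIC\n);")

def to_analysis_script(m: TimeSeriesMiningModel) -> str:
    """Transformation 2: generate a platform-specific time-series analysis stub."""
    return (f"series <- aggregate({m.fact} ~ {m.time_dimension}, data = dw, FUN = sum)\n"
            f"fit <- arima(series${m.fact}, order = c(1, 0, 0))")

model = TimeSeriesMiningModel(fact="sales", time_dimension="month",
                              dimensions=["product", "store"])
print(to_warehouse_ddl(model))
print(to_analysis_script(model))
```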

Relevance:

10.00%

Publisher:

Abstract:

There is currently an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in Biological Databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB or GenBank), whose main drawback is the cost of keeping them up to date, which causes them to become obsolete quickly. Nevertheless, these databases are the main tool for enterprises when they want to update their internal information, for example when a plant-breeding enterprise needs to enrich its genetic information (internal structured database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement the internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information, combining traditional database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantly compare internal data with external data from competitors, allowing them to take quick strategic decisions based on richer data.
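A minimal sketch of the integration idea described above, assuming a stubbed QA component in place of the real Web/PubMed back end; the data and names (INTERNAL_DW, qa_external, enrich_trait) are hypothetical, not part of the paper's framework.

```python
# Sketch only: internal (structured) genetic data is enriched with external
# answers returned by a QA component. The QA call here is a stub; a real
# system would query unstructured Web sources (e.g. PubMed) behind it.
from typing import Dict, List

INTERNAL_DW: Dict[str, List[str]] = {       # toy stand-in for the corporate DW
    "awn colour": ["gene_B1"],
}

def qa_external(question: str) -> List[str]:
    """Stub for the QA system over unstructured external sources."""
    return ["gene_Hd1"]                     # hypothetical answer

def enrich_trait(trait: str) -> Dict[str, List[str]]:
    internal = INTERNAL_DW.get(trait, [])
    external = qa_external(f"Which genes are related to {trait}?")
    # Decision makers see both views side by side and can compare them.
    return {"internal": internal,
            "external_only": [g for g in external if g not in internal]}

print(enrich_trait("awn colour"))
```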

Relevance:

10.00%

Publisher:

Abstract:

Decision support systems (DSS) support business or organizational decision-making activities, which require access to information stored internally in databases or data warehouses, and externally on the Web, accessed through Information Retrieval (IR) or Question Answering (QA) systems. Graphical interfaces for querying these sources of information make it easy to constrain query formulation dynamically based on user selections, but they lack flexibility, since their expressive power is limited by the design of the user interface. Natural language interfaces (NLI) are regarded as the optimal solution; however, truly natural communication is difficult to realize effectively, especially for non-expert users. In this paper, we propose an NLI that improves the interaction between the user and the DSS by referencing previous questions or their answers (i.e. anaphora, such as the pronoun reference in “What traits are affected by them?”), or by eliding parts of the question (i.e. ellipsis, such as “And to glume colour?” after the question “Tell me the QTLs related to awn colour in wheat”). Moreover, in order to overcome one of the main problems of NLIs, namely the difficulty of adapting them to a new domain, our proposal is based on ontologies obtained semi-automatically from a framework that allows the integration of internal and external, structured and unstructured information. Therefore, our proposal can interface with databases, data warehouses, QA and IR systems. Because of the high ambiguity of natural language in the resolution process, our proposal is presented as an authoring tool that helps the user query efficiently in natural language. Finally, our proposal is tested on a DSS case scenario about Biotechnology and Agriculture, whose knowledge base is the CEREALAB database as internal structured data, and the Web (e.g. PubMed) as external unstructured information.
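The sketch below illustrates the kind of context-aware query rewriting this abstract describes (anaphora and ellipsis resolution); it is an assumed, simplified behaviour rather than the authors' implementation, and the class and method names are hypothetical.

```python
# Sketch: the previous question's answer entities resolve a pronoun ("them"),
# and an elliptical follow-up ("And to X?") reuses the previous question
# with the trait swapped in.
import re
from typing import List, Optional

class DialogueContext:
    def __init__(self) -> None:
        self.last_question: Optional[str] = None
        self.last_answer: List[str] = []

    def rewrite(self, question: str) -> str:
        q = question
        if "them" in q and self.last_answer:                  # anaphora
            q = q.replace("them", ", ".join(self.last_answer))
        m = re.match(r"And to (.+)\?", question)              # ellipsis
        if m and self.last_question:
            q = re.sub(r"related to .+ in", f"related to {m.group(1)} in",
                       self.last_question)
        self.last_question = question
        return q

ctx = DialogueContext()
print(ctx.rewrite("Tell me the QTLs related to awn colour in wheat"))
print(ctx.rewrite("And to glume colour?"))
# -> "Tell me the QTLs related to glume colour in wheat"
```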

Relevance:

10.00%

Publisher:

Abstract:

At a global level, population growth and the expansion of the middle class lead to a growing demand for material resources. The built environment has an enormous impact on this scarcity. In addition, a surplus of construction and demolition waste is a common problem. The construction industry claims to recycle 95% of this waste, but in fact this is mainly downcycling. In the transition towards the circular economy, the quality of reuse becomes increasingly important. Buildings are material warehouses that can contribute to this high-quality reuse. However, several aspects required to achieve this are unknown, and more insight into the potential for high-quality reuse of building materials is needed. Therefore, an instrument has been developed that determines the circularity of construction waste in order to maximise high-quality reuse. The instrument is based on three principles: ‘product and material flows in the end-of-life phase’, ‘future value of secondary materials and products’ and ‘the success of repetition in a new life cycle’. These principles are further divided into a number of criteria to which values and weighting factors are assigned; a degree of circularity can then be determined as a percentage, as sketched below. A case study of a typical 1970s building was carried out. For concrete, the circularity is increased from 25% to 50% by mapping out the potential for high-quality reuse. During the development of the instrument it became clear that some criteria are difficult to measure; accurate and reliable data are limited and assumptions had to be made. To increase the reliability of the instrument, experts have reviewed it several times. In the long term, the instrument can be used as a tool for quantitative research to reduce the amount of construction and demolition waste and to contribute to reducing raw-material scarcity.
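As an illustration of how such a weighted degree of circularity could be computed, the sketch below uses hypothetical criteria scores and weighting factors; none of the figures are taken from the instrument itself.

```python
# Illustrative calculation only; criteria, scores and weights are hypothetical.
# The degree of circularity is read as a weighted score expressed as a percentage.
criteria = {
    # criterion: (score 0..1, weighting factor)
    "end-of-life product and material flows": (0.40, 0.40),
    "future value of secondary materials":    (0.60, 0.35),
    "success of repetition in a new cycle":   (0.55, 0.25),
}

degree = sum(score * weight for score, weight in criteria.values())
total_weight = sum(weight for _, weight in criteria.values())
print(f"Degree of circularity: {degree / total_weight:.0%}")   # roughly 50%
```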

Relevance:

10.00%

Publisher:

Abstract:

After the construction of the San Carlos bastion in Alicante in the final decade of the seventeenth century, and of the great trench which the English built around the district of San Francisco during their years of dominance in the War of Succession, the waters of the San Blas gully caused serious damage to these fortifications of the city and to the trade buildings of the port. In 1772, a diversion canal was built, designed to divert the riverbed of the gully and send the waters directly to the sea. The project had initially been designed by the Engineer General, Jorge Próspero de Verboom, in 1721. This unique work of engineering had some defects, principally in the breakwater that prevented the waters from flowing down the former river course. On several occasions the water returned to its original riverbed, owing to the weakness of the breakwater, the narrowness of the channel’s bed and its lack of regularisation, causing serious damage to the bastion, the Babel-facing façade, the traders’ warehouses and other buildings. This study describes the project that the military engineer Leandro Badarán carried out in 1794 in order to improve this canal technically, and examines his report on the state of the fortifications. Similar works built in Spain are also described, and the repeated disputes between the war department and the port throughout these years over finding a technical solution to the problem are analysed.

Relevance:

10.00%

Publisher:

Abstract:

The majority of organizations store their historical business information in data warehouses, which are queried to make strategic decisions by means of online analytical processing (OLAP) tools. This information has to be properly protected against unauthorized access; nevertheless, there is a great number of legacy OLAP applications that were developed without considering security aspects, or in which security was incorporated only after the system had been implemented. This work defines a reverse-engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and that also analyses and represents the security aspects that may have been established. This process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can easily be modified and improved with security, and automatically transformed to generate the new implementation.
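A minimal sketch of the reverse-engineering step under stated assumptions: the legacy metadata layout, the SecureCube structure and the role annotations below are hypothetical, not the authors' metamodels or transformation rules.

```python
# Sketch: legacy OLAP metadata is reverse engineered into a conceptual
# multidimensional model, which can then be annotated with security rules
# and used to regenerate a secure implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SecureCube:                     # hypothetical conceptual (PIM-level) model
    name: str
    measures: List[str]
    dimensions: List[str]
    security: Dict[str, str] = field(default_factory=dict)  # element -> required role

def reverse_engineer(legacy_metadata: dict) -> SecureCube:
    """Extract the conceptual model from a legacy cube definition."""
    return SecureCube(name=legacy_metadata["cube"],
                      measures=list(legacy_metadata["measures"]),
                      dimensions=list(legacy_metadata["dimensions"]))

legacy = {"cube": "Sales", "measures": ["amount"], "dimensions": ["Customer", "Time"]}
cube = reverse_engineer(legacy)
cube.security["Customer"] = "SalesManager"   # improve the extracted model with security
print(cube)
```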