896 results for computation- and data-intensive applications


Relevance: 100.00%

Abstract:

This paper presents a data-intensive architecture that demonstrates the ability to support applications from a wide range of application domains, and to support the different types of users involved in defining, designing and executing data-intensive processing tasks. The prototype architecture is introduced, and the pivotal role of DISPEL as a canonical language is explained. The architecture promotes the exploration and exploitation of distributed and heterogeneous data and spans the complete knowledge discovery process, from data preparation, to analysis, to evaluation and reiteration. The architecture evaluation included large-scale applications from astronomy, cosmology, hydrology, functional genetics, image processing and seismology.

Relevance: 100.00%

Abstract:

Transportation Department, Research and Special Programs Directorate, Washington, D.C.

Relevance: 100.00%

Abstract:

Population measures for genetic programs are defined and analysed in an attempt to better understand the behaviour of genetic programming. Some measures are simple, but do not provide sufficient insight. The more meaningful ones are complex and take extra computation time. Here we present a unified view on the computation of population measures through an information hypertree (iTree). The iTree allows for a unified and efficient calculation of population measures via a basic tree traversal. © Springer-Verlag 2004.
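The paper's iTree data structure is not specified in this abstract; as a minimal illustration of the underlying idea (computing population measures in a single pass over each program tree), the following sketch derives mean tree size and mean depth for a toy genetic-programming population. The `Node` class and measure names are assumptions, not the paper's definitions.

```python
# Illustrative sketch (not the paper's iTree): computing simple GP
# population measures -- mean tree size and mean depth -- with one
# traversal per program tree.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def size_and_depth(node):
    """Return (node count, depth) of a tree in a single traversal."""
    if not node.children:
        return 1, 1
    sizes, depths = zip(*(size_and_depth(c) for c in node.children))
    return 1 + sum(sizes), 1 + max(depths)

def population_measures(population):
    stats = [size_and_depth(t) for t in population]
    n = len(stats)
    return {
        "mean_size": sum(s for s, _ in stats) / n,
        "mean_depth": sum(d for _, d in stats) / n,
    }

# Tiny population: the programs (+ x y) and x
pop = [Node("+", [Node("x"), Node("y")]), Node("x")]
print(population_measures(pop))  # {'mean_size': 2.0, 'mean_depth': 1.5}
```

The point of collecting all measures in one traversal, as the abstract describes, is to avoid walking each tree once per measure.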

Relevance: 100.00%

Abstract:

The work is partially supported by Grant no. NIP917 of the Ministry of Science and Education – Republic of Bulgaria.

Relevance: 100.00%

Abstract:

The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems may arise especially in fields not closely related to the tasks that concern the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into already established workflows, since in many real-world scenarios AI may not be good enough to replace humans completely; often this replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from “better models” towards better, and smaller, data; he called this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving both datasets and models. The last chapter deviates slightly, with studies on the pandemic, while still preserving the human- and data-centric perspective.

Relevance: 100.00%

Abstract:

The new technologies for Knowledge Discovery from Databases (KDD) and data mining promise to bring new insight into the voluminous and growing body of biological data. KDD technology is complementary to laboratory experimentation and helps speed up biological research. This article contains an introduction to KDD, a review of data mining tools, and their biological applications. We discuss the domain concepts related to biological data and databases, as well as current KDD and data mining developments in biology.

Relevance: 100.00%

Abstract:

In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
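The replicated-data approach the abstract mentions can be illustrated with a small sketch: every "rank" holds a full copy of the data, loop iterations are divided among ranks, and partial results are combined by a global reduction. This is a toy sequential simulation of the pattern, not code from any of the packages discussed (a real chemistry code would distribute work with MPI).

```python
# Minimal sketch of replicated-data parallelisation: each rank holds
# the full (replicated) data, processes a round-robin share of the
# loop iterations, and partial results are summed in an "all-reduce"
# step (simulated here sequentially).

def partial_sum(data, rank, nranks):
    # Each rank evaluates only the iterations assigned to it.
    return sum(x * x for i, x in enumerate(data) if i % nranks == rank)

data = list(range(10))          # replicated on every rank
nranks = 4
partials = [partial_sum(data, r, nranks) for r in range(nranks)]
total = sum(partials)           # the reduction step
print(total)  # 285, same as the sequential sum of squares
```

The trade-off the abstract alludes to is visible even here: the scheme is simple to retrofit onto a sequential code, but the memory cost of replicating `data` on every rank grows with problem size, which suits large-memory nodes like the VPP's.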

Relevance: 100.00%

Abstract:

The progressive aging of the population requires new kinds of social and medical intervention and the availability of different services for the older population. New applications have been developed and some services are now provided at home, allowing older people to stay home instead of having to stay in hospitals. But an adequate response to users' needs will require extensive use of personal data and information, including building and maintaining user profiles and feeding the systems with the data and information needed for proactive scheduling of events in which the user may be involved. Fundamental Rights may be at stake, so a legal analysis must also be considered.

Relevance: 100.00%

Abstract:

The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto

Relevance: 100.00%

Abstract:

Pharmacogenomics is a field with origins in the study of monogenic variations in drug metabolism in the 1950s. Perhaps because of these historical underpinnings, there has been intensive investigation of 'hepatic pharmacogenes' such as CYP450s and liver drug metabolism using pharmacogenomics approaches over the past five decades. Surprisingly, kidney pathophysiology, attendant diseases and treatment outcomes have been vastly under-studied and under-theorized despite their central importance in the maintenance of health, susceptibility to disease and rational personalized therapeutics. Indeed, chronic kidney disease (CKD) represents an increasing public health burden worldwide, both in developed and developing countries. Patients with CKD suffer from high cardiovascular morbidity and mortality, which is mainly attributable to cardiovascular events before reaching end-stage renal disease. In this paper, we focus our analyses on renal function before end-stage renal disease, as seen through the lens of pharmacogenomics and human genomic variation. We synthesize the recent evidence linking selected Very Important Pharmacogenes (VIP) to renal function, blood pressure and salt-sensitivity in humans, and ways in which these insights might inform rational personalized therapeutics. Notably, we highlight and present the rationale for three applications that we consider important and actionable therapeutic and preventive focus areas in renal pharmacogenomics: 1) ACE inhibitors, as a confirmed application; 2) VDR agonists, as a promising application; and 3) moderate dietary salt intake, as a suggested novel application. Additionally, we emphasize the putative contributions of gene-environment interactions and discuss the implications of these findings for treating and preventing hypertension and CKD. Finally, we conclude with a strategic agenda and vision required to accelerate advances in this under-studied field of renal pharmacogenomics, with vast significance for global public health.

Relevance: 100.00%

Abstract:

Microarray transcript profiling and RNA interference are two new technologies crucial for large-scale gene function studies in multicellular eukaryotes. Both rely on sequence-specific hybridization between complementary nucleic acid strands, prompting us to create a collection of gene-specific sequence tags (GSTs) representing at least 21,500 Arabidopsis genes and which are compatible with both approaches. The GSTs were carefully selected to ensure that each of them shared no significant similarity with any other region in the Arabidopsis genome. They were synthesized by PCR amplification from genomic DNA. Spotted microarrays fabricated from the GSTs show good dynamic range, specificity, and sensitivity in transcript profiling experiments. The GSTs have also been transferred to bacterial plasmid vectors via recombinational cloning protocols. These cloned GSTs constitute the ideal starting point for a variety of functional approaches, including reverse genetics. We have subcloned GSTs on a large scale into vectors designed for gene silencing in plant cells. We show that in planta expression of GST hairpin RNA results in the expected phenotypes in silenced Arabidopsis lines. These versatile GST resources provide novel and powerful tools for functional genomics.
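The selection criterion described above, that each tag shares no significant similarity with any other genomic region, can be caricatured with a toy k-mer filter. This is only an illustration of the idea; the authors' actual pipeline relied on sequence-similarity searches against the full Arabidopsis genome, and the sequences and parameter `k` below are invented.

```python
# Toy illustration (not the authors' pipeline): accept a candidate tag
# as "gene-specific" only if none of its k-mers occurs in the genome
# outside the gene's own region.

def kmers(seq, k):
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def is_specific(tag, genome_without_gene, k=8):
    # Any shared k-mer counts as "significant similarity" in this toy.
    background = kmers(genome_without_gene, k)
    return not (kmers(tag, k) & background)

genome_rest = "ATGGCGTACCGTTAGCCATGCGT" * 3   # invented background sequence
print(is_specific("TTTTTTTTTTTT", genome_rest))  # True: no shared 8-mers
print(is_specific("ATGGCGTACCGT", genome_rest))  # False: present in background
```

A real specificity screen would use alignment scores and statistical thresholds rather than exact k-mer sharing, but the accept/reject structure is the same.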

Relevance: 100.00%

Abstract:

The application of two approaches for high-throughput, high-resolution X-ray phase contrast tomographic imaging being used at the tomographic microscopy and coherent radiology experiments (TOMCAT) beamline of the SLS is discussed and illustrated. Differential phase contrast (DPC) imaging, using a grating interferometer and a phase-stepping technique, is integrated into the beamline environment at TOMCAT in terms of the fast acquisition and reconstruction of data and the availability to scan samples within an aqueous environment. A second phase contrast method is a modified transfer of intensity approach that can yield the 3D distribution of the decrement of the refractive index of a weakly absorbing object from a single tomographic dataset. The two methods are complementary to one another: the DPC method is characterised by a higher sensitivity and by moderate resolution with larger samples; the modified transfer of intensity approach is particularly suited for small specimens when high resolution (around 1 µm) is required. Both are being applied to investigations in the biological and materials science fields.

Relevance: 100.00%

Abstract:

The general expansion of operators is defined as a linear combination of projectors, and its generalized application to the calculation of molecular integrals is presented. As a numerical example, the method is applied to the calculation of electron repulsion integrals between four s-type functions centred at different points. Both the calculation results and the definition of scaling with respect to a reference value are shown, the latter of which will facilitate the optimization of the expansion for arbitrary parameters. Results fitted to the exact value are given.

Relevance: 100.00%

Abstract:

Floods are the natural hazards that produce the highest number of casualties and the most material damage in the Western Mediterranean. An improvement in flood risk assessment and a study of a possible increase in flooding occurrence are therefore needed. To carry out these tasks it is important to have at our disposal extensive knowledge of historical floods and to find an efficient way to manage this geographical data. In this paper we present a complete flood database spanning the 20th century for the whole of Catalonia (NE Spain), which includes documentary information (affected areas and damage) and instrumental information (meteorological and hydrological records). This geodatabase, named Inungama, has been implemented on a GIS (Geographical Information System) in order to display all the information within a given geographical scenario, as well as to analyse it using queries, overlays and calculations. Following a description of the type and amount of information stored in the database and the structure of the information system, the first applications of Inungama are presented. The geographical distribution of floods shows the localities most likely to be flooded, confirming that the most affected municipalities are the most densely populated ones in coastal areas. Regarding a possible increase in flooding occurrence, a temporal analysis has been carried out, showing a steady increase over the last 30 years.
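The temporal analysis described above, counting recorded flood events over time to reveal a trend, can be sketched as a simple aggregation over (year, municipality) records. The records below are invented placeholders, not data from Inungama.

```python
# Illustrative sketch of the kind of temporal query a flood geodatabase
# supports: counting recorded flood events per decade. The event list
# is invented, not Inungama data.

from collections import Counter

events = [(1907, "Barcelona"), (1940, "Girona"), (1962, "Terrassa"),
          (1971, "Tarragona"), (1982, "Lleida"), (1987, "Barcelona"),
          (1994, "Girona"), (1996, "Barcelona"), (1999, "Tarragona")]

# Bucket each event into its decade and count.
per_decade = Counter(10 * (year // 10) for year, _ in events)
for decade in sorted(per_decade):
    print(f"{decade}s: {per_decade[decade]} flood event(s)")
```

In a real GIS geodatabase the same aggregation would be expressed as an attribute query grouped by date, and could be joined with the spatial layer to map which municipalities drive the trend.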