778 results for Scientific computing
Abstract:
The fast development and wide application of digital methods, combined with broadened access to the Internet and falling computing costs, have created intense interest in electronic presentation of and access to cultural and scientific heritage resources. Information technologies have offered cultural institutions new opportunities to present their holdings, which are now made accessible not only to specialists but also to citizens and interested parties worldwide. The paper presents an overview of the Bulgarian experience in the field of digital preservation and access, and of ongoing work on the project “Knowledge Transfer for the Digitisation of Scientific and Cultural Heritage to Bulgaria” (MTKD-CT-2004-509754), supported by the Marie Curie programme of the FP6 of the EC.
Abstract:
ACM Computing Classification System (1998): G.2.2.
Abstract:
Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high-performance communication networks are central to contemporary information technology. Computing applications in a wide range of areas, including business, communications, medical research, transportation, entertainment, and education, are transforming societies around the globe. The rapid changes in the fields of computing and information technology also make the study of ethics exciting and challenging, as nearly every day the media report on a new invention, controversy, or court ruling. This tutorial will provide a broad overview of the scientific foundations, technological advances, social implications, and ethical and legal issues related to computing. It will cover the milestones in computing and networking, the social context of computing, professional and ethical responsibilities, philosophical frameworks, and the social, ethical, historical, and political implications of computer and information technology. It will outline the impact of the tremendous growth of computer and information technology on people, ethics, and law. Political and legal implications become clear when we analyze how technology has outpaced the legal and political arenas.
Abstract:
The concept of cloud computing entered public awareness a few years ago and now occupies a growing place in both the professional and scientific literature and in IT practice. The cloud has a profound impact on business information systems, especially on ERP systems: indirectly, it drives a massive standardization of ERP systems and their services. In this paper, the authors provide a literature overview of the current situation of cloud computing and of the requirements that end users place on data processing systems operated in the cloud, ERP systems in particular. The majority of the investigated cases are based on samples from Germany, and initial application experience is discussed. The authors also treat separately the new selection objectives and criteria for ERP systems that have arisen because of the specific possibilities of the cloud environment.
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these pertinent issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
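A minimal sketch of the kind of deadline-aware placement described above, in which predicted runtimes decide whether a job can be accepted on a VM. The Job/VMSlot names and the earliest-deadline-first policy are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hypothetical sketch: deadline-aware placement of jobs onto VM slots using
# predicted runtimes; not the dissertation's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    name: str
    predicted_runtime: float  # seconds, e.g. from a performance model (~15% error)
    deadline: float           # absolute time by which the job must finish

@dataclass
class VMSlot:
    name: str
    free_at: float = 0.0      # time at which this VM becomes idle
    assigned: List[str] = field(default_factory=list)

def schedule(jobs: List[Job], vms: List[VMSlot]) -> List[str]:
    """Earliest-deadline-first placement; a job is accepted only if its
    predicted completion time on the least-loaded VM meets the deadline."""
    rejected = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        vm = min(vms, key=lambda v: v.free_at)           # least-loaded VM
        finish = vm.free_at + job.predicted_runtime
        if finish <= job.deadline:
            vm.free_at = finish
            vm.assigned.append(job.name)
        else:
            rejected.append(job.name)                    # would miss its deadline
    return rejected

if __name__ == "__main__":
    jobs = [Job("segmentation", 120, 400), Job("registration", 300, 350)]
    vms = [VMSlot("vm-1"), VMSlot("vm-2")]
    print("rejected:", schedule(jobs, vms))
    for vm in vms:
        print(vm.name, vm.assigned)
```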
Abstract:
Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution for deploying a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment, and evaluations with different types of TCP sample connections demonstrate the functionality of the proposed solution and provide throughput measurements in relation to relevant design parameters.
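A minimal sketch of how TCP throughput between testbed VMs could be measured and summarized with iperf3. The endpoint addresses are placeholders and this is not the paper's testbed code; it assumes iperf3 servers ("iperf3 -s") are already running on the target machines.

```python
# Illustrative only: measure TCP throughput to each VM endpoint with iperf3
# and report Mbit/s received, parsed from iperf3's JSON output.
import json
import subprocess

ENDPOINTS = ["10.0.0.11", "10.0.0.12"]   # hypothetical VM addresses

def tcp_throughput_mbps(host: str, seconds: int = 10) -> float:
    """Run a single iperf3 TCP test against `host` and return Mbit/s received."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],   # -J = JSON output
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    for host in ENDPOINTS:
        print(f"{host}: {tcp_throughput_mbps(host):.1f} Mbit/s")
```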
Abstract:
Salman, M. et al. (2016). Integrating Scientific Publication into an Applied Gaming Ecosystem. GSTF Journal on Computing (JoC), 5(1), 45–51.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the above question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation, and storage-related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grained weights, based on knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and the other taking both performance and costs into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three ranked VMs produced by the methodology.
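A toy illustration of the weighted-ranking idea: normalized benchmark scores are combined with user-supplied weights, and the VMs are ranked once by performance alone and once by performance per unit cost. All metric names, weights, scores, and prices below are made up for illustration; they are not the paper's benchmark data.

```python
# Hypothetical weights for the four abstract categories named in the abstract.
WEIGHTS = {"memory": 0.4, "local_comm": 0.1, "compute": 0.4, "storage": 0.1}

VMS = {
    # made-up normalized benchmark scores (higher is better) and hourly price in USD
    "m4.large":  {"memory": 0.7, "local_comm": 0.6, "compute": 0.5, "storage": 0.6, "price": 0.10},
    "c4.xlarge": {"memory": 0.6, "local_comm": 0.7, "compute": 0.9, "storage": 0.5, "price": 0.20},
    "i3.large":  {"memory": 0.5, "local_comm": 0.5, "compute": 0.4, "storage": 0.9, "price": 0.16},
}

def performance_score(vm: dict) -> float:
    """Weighted sum of the benchmark scores for one VM."""
    return sum(WEIGHTS[k] * vm[k] for k in WEIGHTS)

# Ranking 1: performance only. Ranking 2: performance per unit cost.
perf_rank = sorted(VMS, key=lambda v: performance_score(VMS[v]), reverse=True)
value_rank = sorted(VMS, key=lambda v: performance_score(VMS[v]) / VMS[v]["price"], reverse=True)

print("performance ranking:      ", perf_rank)
print("performance/cost ranking: ", value_rank)
```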
Abstract:
Scientific workflows orchestrate the execution of complex experiments, frequently using distributed computing platforms. Meta-workflows are an emerging type of such workflows that aim to reuse existing workflows, potentially from different workflow systems, to achieve more complex experimentation while minimizing workflow design and testing efforts. Workflow interoperability plays a profound role in achieving this objective. This paper is focused on fostering interoperability across meta-workflows that combine workflows of different workflow systems from diverse scientific domains. This is achieved by formalizing definitions of the meta-workflow and its different types, in order to standardize the data structures used to describe workflows to be published and shared via public repositories. The paper also includes a thorough formalization of two workflow interoperability approaches based on this formal description: the coarse-grained and the fine-grained workflow interoperability approach. The paper presents a case study from astrophysics which successfully demonstrates the use of the concepts of meta-workflows and workflow interoperability within a scientific simulation platform.
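As a hypothetical illustration of the coarse-grained approach, where complete workflows from different systems are embedded as single nodes of a meta-workflow, one might describe a meta-workflow with data structures along these lines (this is not the paper's formal model; names and URLs are placeholders):

```python
# Sketch of a meta-workflow description: each node wraps a whole workflow from
# some workflow system, and edges record data dependencies between nodes.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SubWorkflow:
    name: str
    system: str            # e.g. "Taverna", "Galaxy", "WS-PGRADE" (illustrative)
    descriptor_url: str    # where the native workflow description is published

@dataclass
class MetaWorkflow:
    name: str
    nodes: List[SubWorkflow] = field(default_factory=list)
    edges: List[Tuple[str, str]] = field(default_factory=list)  # (producer, consumer)

    def add_dependency(self, producer: str, consumer: str) -> None:
        self.edges.append((producer, consumer))

mw = MetaWorkflow("astro-simulation")
mw.nodes = [
    SubWorkflow("preprocess", "Galaxy", "https://example.org/wf/preprocess"),
    SubWorkflow("simulate", "WS-PGRADE", "https://example.org/wf/simulate"),
]
mw.add_dependency("preprocess", "simulate")
print(mw)
```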
Abstract:
Early definitions of the smart building focused almost entirely on the technology aspect and did not suggest user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users’ productivity and an indirect effect on energy saving. From the users’ perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality, and acoustic comfort. In recent years the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the Internet, and communication technologies have enabled a new paradigm called edge computing, which brings computation and data storage closer to the location where they are needed, in order to improve response times and save bandwidth. This has allowed us to improve services, sustainability, and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal condition of the building, and monitoring of HVAC data for the energy efficiency of the campus. Though these projects contribute to the realization of the smart campus, a framework for the smart campus is yet to be defined. These new technologies have also introduced new research challenges: within this thesis work, some of the principal open challenges are faced, proposing a new conceptual framework, technologies, and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic deployment for air quality, and book preservation.
Abstract:
The present thesis reports on the various research projects to which I have contributed during my PhD period, working with several research groups, and whose results have been communicated in a number of scientific publications. The main focus of my research activity was to learn, test, exploit, and extend the recently developed vdW-DFT (van der Waals corrected Density Functional Theory) methods for computing the structural, vibrational, and electronic properties of ordered molecular crystals from first principles. A secondary, and more recent, research activity has been the analysis, with microelectrostatic methods, of Molecular Dynamics (MD) simulations of disordered molecular systems. While only very unreliable methods based on empirical models were practically usable until a few years ago, accurate calculations of the crystal energy are now possible, thanks to very fast modern computers and to the excellent performance of the best vdW-DFT methods. Accurate energies are particularly important for describing organic molecular solids, since they often exhibit several alternative crystal structures (polymorphs), with very different packing arrangements but very small energy differences. Standard DFT methods do not describe the long-range electron correlations that give rise to the vdW interactions. Although weak, these interactions are extremely sensitive to the packing arrangement, and neglecting them used to be a problem. The calculation of reliable crystal structures and vibrational frequencies has become possible only recently, thanks to the development of good representations of the vdW contribution to the energy (known as “vdW corrections”).
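To illustrate the kind of term that standard DFT misses, the following toy sketch evaluates a Grimme-D2-style pairwise dispersion correction, E_disp = -s6 Σ_{i<j} C6_ij R_ij^{-6} f_damp(R_ij), with a Fermi-type damping function. The parameter values and units are placeholders for illustration and have nothing to do with the vdW-DFT machinery actually used in the thesis.

```python
# Toy pairwise dispersion ("vdW") correction in the spirit of Grimme's D2 scheme.
# All numbers are arbitrary placeholders, not production parameters.
import math

def damping(r: float, r0: float, d: float = 20.0) -> float:
    """Fermi-type damping that switches the correction off at short range."""
    return 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))

def dispersion_energy(coords, c6, r_vdw, s6: float = 0.75) -> float:
    """Sum of damped -C6/R^6 pair terms over a list of atomic positions."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            c6_ij = math.sqrt(c6[i] * c6[j])          # geometric-mean combination rule
            r0_ij = r_vdw[i] + r_vdw[j]               # sum of vdW radii
            e -= s6 * c6_ij / r ** 6 * damping(r, r0_ij)
    return e

# two-"atom" toy example (arbitrary units)
print(dispersion_energy([(0, 0, 0), (0, 0, 3.5)], c6=[1.75, 1.75], r_vdw=[1.5, 1.5]))
```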
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems aggregate computing power to deliver considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in the datacenter is a complex controlled environment that hosts thousands of computing nodes consuming electrical power in the range of megawatts, all of which is ultimately transformed into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing significant spatial and temporal thermal and power heterogeneity. Minor thermal issues or anomalies can therefore start a chain of events that leads to an imbalance between the heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, detecting or predicting them in time is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzes and characterizes the power and thermal behaviour of a Tier-0 datacenter (CINECA) during production and under abnormal thermal conditions. A Deep Learning (DL)-powered thermal hazard prediction framework is then proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset comprising monitoring data from tens of thousands of sensors, collected over around 24 months at a sampling interval of around 20 seconds.
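The thesis' framework is Deep Learning based; as a much simpler, hypothetical baseline, the sketch below flags thermal anomalies when a sensor reading deviates strongly from its recent rolling statistics. Window sizes, thresholds, and the synthetic data are illustrative only.

```python
# Not the thesis' DL framework: a rolling z-score baseline for flagging
# anomalous temperature samples in a single sensor stream.
import numpy as np
import pandas as pd

def flag_thermal_anomalies(temps: pd.Series, window: int = 180, z_thresh: float = 4.0) -> pd.Series:
    """Mark samples whose rolling z-score exceeds z_thresh.
    With one sample every ~20 s, window=180 covers roughly one hour of history."""
    mean = temps.rolling(window, min_periods=window).mean()
    std = temps.rolling(window, min_periods=window).std()
    z = (temps - mean) / std
    return z.abs() > z_thresh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = pd.Series(45 + rng.normal(0, 0.3, 5000))   # synthetic inlet temperature (°C)
    t.iloc[4000:4050] += 6.0                        # injected hot spot
    alarms = flag_thermal_anomalies(t)
    print("anomalous samples:", int(alarms.sum()))
```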
Abstract:
One of the main practical implications of quantum mechanical theory is quantum computing, and therefore the quantum computer. Quantum computing (for example, with Shor’s algorithm) challenges the computational hardness assumptions, such as the factoring problem and the discrete logarithm problem, that anchor the security of cryptosystems. The scientific community is therefore studying how to defend cryptography; there are two defense strategies: quantum cryptography (which involves the use of quantum cryptographic algorithms on quantum computers) and post-quantum cryptography (based on classical cryptographic algorithms, but resistant to quantum computers). For example, the National Institute of Standards and Technology (NIST) is collecting and standardizing post-quantum ciphers, as it established DES and AES as symmetric cipher standards in the past. In this thesis an introduction to quantum mechanics is given, in order to be able to discuss quantum computing and to analyze Shor’s algorithm. The differences between quantum and post-quantum cryptography are then analyzed. Subsequently the focus shifts to the mathematical problems assumed to be resistant to quantum computers. To conclude, the post-quantum digital signature algorithms selected by NIST are studied and compared, in order to apply them in everyday use.
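As a small illustration of why Shor’s algorithm threatens factoring-based cryptosystems, the sketch below performs the classical post-processing of Shor on toy numbers, with a brute-force period finder standing in for the quantum subroutine (which is the only part a quantum computer actually accelerates).

```python
# Classical skeleton of Shor's factoring algorithm: given the period r of
# a^x mod N, a nontrivial factor of N is gcd(a^(r/2) +/- 1, N). Toy numbers only.
import math
import random

def find_period(a: int, n: int) -> int:
    """Brute-force multiplicative order of a modulo n (quantum computers speed this up)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Return a nontrivial factor of the composite odd number n."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                       # lucky guess already shares a factor
        r = find_period(a, n)
        if r % 2 == 0:
            candidate = math.gcd(pow(a, r // 2) - 1, n)
            if 1 < candidate < n:
                return candidate           # otherwise retry with a new a

print(shor_factor(15))    # classic toy example: returns 3 or 5
```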