892 results for Parallel computing, Virtual machine, Composition, Determinism, Abstraction


Relevance: 30.00%

Publisher:

Abstract:

Word Sense Disambiguation, the process of identifying the meaning of a word in a sentence when the word has multiple meanings, is a critical problem in machine translation. It is generally very difficult to select the correct meaning of a word in a sentence, especially when the syntactic differences between the source and target languages are large, as in English-Korean machine translation. To achieve a high level of accuracy in noun sense selection for machine translation, we introduced a statistical method based on the co-occurrence relations of words in sentences and applied it to the English-Korean machine translator RyongNamSan. ACM Computing Classification System (1998): I.2.7.
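
The abstract does not give the scoring details, so the following is only a minimal sketch of co-occurrence-based noun sense selection; the sense inventory, corpus, and scoring function are invented for illustration and are not RyongNamSan's actual method.

```python
from collections import Counter, defaultdict

def train_cooccurrence(tagged_sentences):
    """tagged_sentences: list of (sense_label, [context words]) pairs."""
    counts = defaultdict(Counter)
    for sense, words in tagged_sentences:
        counts[sense].update(w.lower() for w in words)
    return counts

def select_sense(counts, context_words):
    """Pick the sense whose training co-occurrence counts best match the context."""
    def score(sense):
        c = counts[sense]
        total = sum(c.values()) or 1
        # Sum of relative co-occurrence frequencies of the context words.
        return sum(c[w.lower()] / total for w in context_words)
    return max(counts, key=score)

# Toy example: the noun "bank" with two senses.
training = [
    ("bank/finance", "deposit money account interest loan".split()),
    ("bank/river",   "river water shore fishing flood".split()),
]
model = train_cooccurrence(training)
print(select_sense(model, "we walked along the river to the bank".split()))
```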

Relevance: 30.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013

Relevance: 30.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013

Relevance: 30.00%

Publisher:

Abstract:

The paper presents a study that focuses on supporting educational experts in choosing the right combination of educational methodology and technology tools when designing training and learning programs. It is based on research in the field of adaptive intelligent e-learning systems. The object of study is teachers' professional growth in technology, in particular the part of their qualification achieved through targeted teacher training. The article presents the process of creating and testing a decision-support system for the design of teacher training, leading to more effective implementation of technology in education and its integration in diverse educational contexts. ACM Computing Classification System (1998): H.4.2, I.2.1, I.2, I.2.4, F.4.1.

Relevance: 30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): D.2.11, D.1.3, D.3.1, J.3, C.2.4.

Relevance: 30.00%

Publisher:

Abstract:

Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools than the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. It is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
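
As an illustration of the large p, small n setting mentioned above, a penalized (ridge) regression stays well-posed even when p greatly exceeds n; the sketch below uses synthetic data and an arbitrarily chosen penalty and is not a method taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500                       # large p, small n setting
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0   # only a few informative features
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 1.0                            # regularization (penalty) strength
# Ridge estimate: (X'X + lam*I)^{-1} X'y remains invertible even when p >> n.
w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("recovered leading coefficients:", np.round(w[:5], 2))
```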

Relevance: 30.00%

Publisher:

Abstract:

This paper describes a PC-based mainframe computer emulator called VisibleZ and its use in teaching mainframe Computer Organization and Assembly Programming classes. VisibleZ models IBM's z/Architecture and allows direct interpretation of mainframe assembly language object code in a graphical user interface environment developed in Java. The VisibleZ emulator acts as an interactive visualization tool to simulate enterprise computer architecture. The provided architectural components include main storage, CPU, registers, the Program Status Word (PSW), and I/O channels. Particular attention is given to providing visual cues to the user through color-coded screen components, machine instruction execution, and animation of the machine architecture components. Students interact with VisibleZ by executing machine instructions in a step-by-step mode, simultaneously observing the contents of memory, registers, and changes in the PSW during the fetch-decode-execute machine instruction cycle. The object-oriented design and implementation of VisibleZ allow students to develop their own instruction semantics by coding Java for specific existing z/Architecture machine instructions, or to design and implement new machine instructions. The use of VisibleZ in lectures, labs, and assignments is described in the paper and supported by a website that hosts an extensive collection of related materials. VisibleZ has proven to be a useful tool in mainframe Assembly Language Programming and Computer Organization classes. Using VisibleZ, students develop a better understanding of mainframe concepts, components, and how the mainframe computer works. ACM Computing Classification System (1998): C.0, K.3.2.
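
VisibleZ itself is written in Java and models z/Architecture; purely to illustrate the fetch-decode-execute cycle the emulator animates, here is a toy register-machine loop with an invented two-byte instruction set (not z/Architecture).

```python
# Toy fetch-decode-execute loop: 2-byte instructions (opcode, operand),
# one register, and a HALT opcode. Hypothetical ISA, not z/Architecture.
memory = [0x01, 5,       # LOAD  r0 <- 5
          0x02, 3,       # ADD   r0 += 3
          0x03, 0,       # PRINT r0
          0xFF, 0]       # HALT
registers = {"r0": 0}
pc = 0                   # program counter (plays the role of the PSW address)

while True:
    opcode, operand = memory[pc], memory[pc + 1]    # fetch
    pc += 2
    if opcode == 0x01:                              # decode + execute
        registers["r0"] = operand
    elif opcode == 0x02:
        registers["r0"] += operand
    elif opcode == 0x03:
        print("r0 =", registers["r0"])
    elif opcode == 0xFF:
        break
```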

Relevance: 30.00%

Publisher:

Abstract:

Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. Selection and composition decisions tend to encounter numerous uncertainties: service consumers and applications have little control over these services and tend to be uncertain about their level of support for the desired functional and non-functional properties. In this paper, we contribute a self-awareness framework that implements two levels of awareness, Stimulus-awareness and Time-awareness. The former responds to basic changes in the environment, while the latter takes into consideration the historical performance of the services. We have used volunteer service computing as an example to demonstrate the benefits that self-awareness can bring to self-adaptation. We have compared the Stimulus- and Time-awareness approaches with a recent Ranking approach from the literature. The results show that the Time-awareness level has the advantage of satisfying a higher number of requests at a lower time cost.
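
The paper does not spell out the selection rules, so the following is only a hedged illustration of the two awareness levels: a stimulus-aware policy reacting to the services' current state, and a time-aware policy that also folds in their history. The service names and numbers are invented.

```python
# Each volunteered service reports its current load (the stimulus) and keeps a
# history of past response times (used only by the time-aware level).
services = {
    "svcA": {"load": 0.2, "history": [1.2, 1.1, 3.9, 4.0]},
    "svcB": {"load": 0.5, "history": [0.9, 1.0, 1.1, 1.0]},
}

def stimulus_aware(services):
    # React only to the current stimulus: pick the least loaded service.
    return min(services, key=lambda s: services[s]["load"])

def time_aware(services, weight=0.5):
    # Blend current load with average historical response time.
    def cost(s):
        hist = services[s]["history"]
        return weight * services[s]["load"] + (1 - weight) * sum(hist) / len(hist)
    return min(services, key=cost)

print(stimulus_aware(services))  # svcA: lowest instantaneous load
print(time_aware(services))      # svcB: better historical performance
```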

Relevance: 30.00%

Publisher:

Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Nonlinear Schrödinger Equation (NLSE) coupled with the Drude model for plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model that describes femtosecond laser pulse propagation in transparent media [4,5]; our approach, however, can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
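
The paper's model (a generalized NLSE coupled to a Drude plasma term) is far richer than can be shown here; as a minimal illustration of the kind of propagation solver involved, the sketch below applies one-dimensional split-step Fourier stepping to the cubic Schrödinger equation i∂ψ/∂z + ½∂²ψ/∂t² + |ψ|²ψ = 0 on the CPU with NumPy. A GPU version would move the FFTs and pointwise updates onto CUDA; all parameters are invented.

```python
import numpy as np

N, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
psi = 1.0 / np.cosh(t)             # fundamental soliton as initial condition
dz, steps = 1e-3, 5000

for _ in range(steps):
    # Linear half-step in the Fourier domain: dispersion term (1/2) d^2/dt^2.
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dz / 2) * np.fft.fft(psi))
    # Nonlinear full step: Kerr term |psi|^2 psi.
    psi *= np.exp(1j * np.abs(psi)**2 * dz)
    # Second linear half-step completes the symmetric split-step scheme.
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dz / 2) * np.fft.fft(psi))

print("peak amplitude after propagation:", np.abs(psi).max())
```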

Relevance: 30.00%

Publisher:

Abstract:

In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics, Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested both on a local computer cluster and on virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run a cross-evaluation of generic and graph-based analytic algorithms over graph processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with increasing computing resources. While GraphLab largely outperforms Spark for graph algorithms, its running time for non-graph algorithms is close to that of Spark. Additionally, the running time of Spark for graph algorithms over cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
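
The benchmark suite itself is not reproduced in the abstract; as an illustration of the style of graph workload being timed on Spark, the sketch below runs a small PySpark PageRank loop with a toy graph, an arbitrary iteration count and damping factor, and a local Spark installation assumed to be available. It is not the paper's benchmark.

```python
import time
from pyspark import SparkContext

sc = SparkContext("local[*]", "pagerank-benchmark")

# Toy adjacency list; a real benchmark would load a large graph dataset.
links = sc.parallelize([("a", ["b", "c"]), ("b", ["c"]), ("c", ["a"])]).cache()
ranks = links.mapValues(lambda _: 1.0)

start = time.time()
for _ in range(10):
    # Each page splits its rank evenly among its out-links, then ranks are re-summed.
    contribs = links.join(ranks).flatMap(
        lambda node: [(dst, node[1][1] / len(node[1][0])) for dst in node[1][0]])
    ranks = contribs.reduceByKey(lambda a, b: a + b) \
                    .mapValues(lambda s: 0.15 + 0.85 * s)
print(ranks.collect(), "elapsed:", time.time() - start, "s")
sc.stop()
```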

Relevance: 30.00%

Publisher:

Abstract:

This paper proposes a new thermography-based maximum power point tracking (MPPT) scheme to address photovoltaic (PV) partial shading faults. Solar power generation utilizes a large number of PV cells connected in series and in parallel in an array and physically distributed across a large field. When a PV module is faulted or partial shading occurs, the PV system sees a nonuniform distribution of generated electrical power and of the thermal profile, and multiple maximum power points (MPPs) appear. If left untreated, this reduces the overall power generation, and severe faults may propagate, resulting in damage to the system. In this paper, a thermal camera is employed for fault detection and a new MPPT scheme is developed to alter the operating point to match an optimized MPP. Extensive data mining is conducted on the images from the thermal camera in order to locate global MPPs. Based on this, a virtual MPPT is set out to find the global MPP. This can reduce the MPPT time and be used to calculate the MPP reference voltage. Finally, the proposed methodology is experimentally implemented and validated by tests on a 600-W PV array.
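
The thermography and control details belong to the paper; purely to illustrate why partial shading creates multiple MPPs and how a global scan picks the reference voltage, the sketch below searches a made-up two-peak P-V curve. Curve shape and numbers are invented.

```python
import numpy as np

# Toy P-V characteristic of a partially shaded string: two local power peaks.
v = np.linspace(0.0, 60.0, 601)                    # terminal voltage [V]
p = 300 * np.exp(-((v - 20) / 6) ** 2) \
    + 450 * np.exp(-((v - 45) / 5) ** 2)           # power [W], two humps

global_idx = np.argmax(p)                          # global MPP over the full scan
v_ref, p_max = v[global_idx], p[global_idx]
print(f"MPP reference voltage: {v_ref:.1f} V, power: {p_max:.0f} W")

# A hill-climbing tracker started near the 20 V hump would stop at the local
# peak (~300 W); the global scan above avoids that trap.
```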

Relevance: 30.00%

Publisher:

Abstract:

A rich collection of Heteroptera, extracted with Berlese funnels by Dr. I. Loksa between 1953 and 1974 in Hungary, has been examined. Altogether 157 true bug species have been identified. The great majority of them were found in very low numbers; there are only 27 species of which more than 10 adult individuals were found. Some species considered rare or very rare in Hungary were collected in relatively great numbers (Ceratocombus coleoptratus, Cryptostemma pusillimum, C. waltli, Acalypta carinata, A. platycheila, Loricula ruficeps, Myrmedobia exilis). The three families that are more or less rich in species and have the highest ratio of extracted species were Rhyparochromidae, Tingidae and Nabidae. Of these, Rhyparochromidae was found to be the most diverse and most characteristic family at ground level. Individuals of the families Tingidae, Hebridae and Rhyparochromidae were found in the greatest numbers. The occurrence of the lace bug Campylosteira orientalis Horváth, 1881 in Hungary has been verified by a voucher specimen. With respect to the environmental changes throughout the country, parallel changes have been observed in the zoogeographical distribution of the ground-living bugs.

Relevance: 30.00%

Publisher:

Abstract:

The total time a customer spends in the business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design, and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model. The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network (MOQN) models so that the customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination. The main findings of the research are, first, that parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity; however, under highly utilized servers the coordination time introduced by parallelization overwhelms its savings, since the waiting time increases significantly and thus the cycle-time increases. Third, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology employed by the company is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
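
The dissertation's multi-class open queueing network is more elaborate than can be shown here; the standard building block it rests on can be illustrated with the M/M/c cycle-time calculation (Erlang C), shown below for an invented activity with arbitrarily chosen arrival and service rates.

```python
from math import factorial

def mmc_cycle_time(lam, mu, c):
    """Mean time in system (waiting + service) for an M/M/c queue."""
    rho = lam / (c * mu)
    assert rho < 1, "the activity is overloaded"
    a = lam / mu
    # Erlang C: probability that an arriving customer has to wait.
    p_wait = (a**c / factorial(c)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(c)) + a**c / factorial(c))
    wq = p_wait / (c * mu - lam)       # mean waiting time in queue
    return wq + 1 / mu                 # cycle time = waiting + service

# One activity staffed by 3 people, 10 customers/hour, 4 served per person-hour.
print(f"cycle time: {mmc_cycle_time(lam=10, mu=4, c=3) * 60:.1f} minutes")
```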

Relevance: 30.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, these environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
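
The actual prediction and scheduling algorithms are the dissertation's contribution; as an illustrative stand-in only, the sketch below predicts each job's runtime from the history of its application type and admits it onto the least-loaded VM only if its deadline can still be met. Application names, runtimes, the 15% margin, and the VM pool are all invented.

```python
from statistics import mean

# Historical runtimes per application type (seconds); purely invented numbers.
history = {"segmentation": [120, 130, 125], "registration": [300, 320, 310]}

def predict(app):
    # Simple predictor: historical mean plus a 15% safety margin.
    return mean(history[app]) * 1.15

def schedule(jobs, vm_free_at):
    """jobs: list of (name, app_type, deadline). vm_free_at: per-VM next-free time."""
    placed = []
    for name, app, deadline in sorted(jobs, key=lambda j: j[2]):  # earliest deadline first
        vm = min(vm_free_at, key=vm_free_at.get)                  # least-loaded VM
        finish = vm_free_at[vm] + predict(app)
        if finish <= deadline:                                    # admit only if it fits
            vm_free_at[vm] = finish
            placed.append((name, vm, finish))
    return placed

jobs = [("j1", "segmentation", 400), ("j2", "registration", 900), ("j3", "segmentation", 200)]
print(schedule(jobs, {"vm0": 0.0, "vm1": 0.0}))
```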

Relevance: 30.00%

Publisher:

Abstract:

Over the past few decades, we have enjoyed tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance and reliability, but also increase leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
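
The dissertation's system-level models are its own contribution; as a generic illustration of the kind of coupling involved, the sketch below simulates a discretized lumped-RC thermal model with temperature-dependent leakage. All constants are invented and the leakage term is a simple linearization, not the dissertation's model.

```python
# Lumped-RC thermal model: C * dT/dt = P_dyn + P_leak(T) - (T - T_amb) / R,
# with leakage linearized around the ambient temperature. Constants are invented.
T_amb, R, C = 45.0, 0.8, 30.0       # ambient temp [C], thermal resistance, capacitance
alpha, beta = 2.0, 0.05             # leakage model: P_leak = alpha + beta * (T - T_amb)
P_dyn, dt = 25.0, 0.1               # dynamic power [W], time step [s]

T = T_amb
for step in range(3000):            # 300 s of simulated time
    P_leak = alpha + beta * (T - T_amb)             # leakage grows with temperature
    T += dt * (P_dyn + P_leak - (T - T_amb) / R) / C
print(f"steady-state temperature: {T:.1f} C")
```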