930 results for System Computing


Relevance:

30.00%

Publisher:

Abstract:

This article presents the principal results of the doctoral thesis “Direct Operational Methods in the Environment of a Computer Algebra System” by Margarita Spiridonova (Institute of Mathematics and Informatics, BAS), successfully defended before the Specialised Academic Council for Informatics and Mathematical Modelling on 23 March 2009.

Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.

Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): G.2.2.

Relevance:

30.00%

Publisher:

Abstract:

ACM Computing Classification System (1998): G.2.2, G.2.3.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a refinement of some successive overrelaxation methods based on the reverse Gauss–Seidel method for solving a system of linear equations Ax = b via the decomposition A = T_m − E_m − F_m, where T_m is a banded matrix of bandwidth 2m + 1. We study the convergence of the methods and give a software implementation of the algorithms as a Mathematica package, with numerical examples. ACM Computing Classification System (1998): G.1.3.
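As a point of reference for the iteration being refined, here is a minimal Python/NumPy sketch of a backward (reverse Gauss–Seidel) SOR sweep for Ax = b; the banded splitting A = T_m − E_m − F_m and the paper's actual refinement are not reproduced, and the function name and test system are invented.

```python
import numpy as np

def backward_sor(A, b, omega=1.1, tol=1e-10, max_iter=10_000):
    """Backward (reverse Gauss-Seidel) SOR sweep for Ax = b.

    A plain dense illustration of the classical iteration only; the
    paper's refinement based on the banded splitting is not shown.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n - 1, -1, -1):          # reverse sweep: last row first
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
print(backward_sor(A, b))   # close to np.linalg.solve(A, b)
```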

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a PC-based mainframe computer emulator called VisibleZ and its use in teaching mainframe Computer Organization and Assembly Programming classes. VisibleZ models IBM's z/Architecture and allows direct interpretation of mainframe assembly language object code in a graphical user interface environment developed in Java. The VisibleZ emulator acts as an interactive visualization tool to simulate enterprise computer architecture. The provided architectural components include main storage, CPU, registers, Program Status Word (PSW), and I/O channels. Particular attention is given to providing visual cues to the user by color-coding screen components, machine instruction execution, and animation of the machine architecture components. Students interact with VisibleZ by executing machine instructions in a step-by-step mode, simultaneously observing the contents of memory, registers, and changes in the PSW during the fetch-decode-execute machine instruction cycle. The object-oriented design and implementation of VisibleZ allows students to develop their own instruction semantics by coding Java for specific existing z/Architecture machine instructions or by designing and implementing new machine instructions. The use of VisibleZ in lectures, labs, and assignments is described in the paper and supported by a website that hosts an extensive collection of related materials. VisibleZ has proven to be a useful tool in mainframe Assembly Language Programming and Computer Organization classes. Using VisibleZ, students develop a better understanding of mainframe concepts and components and of how the mainframe computer works. ACM Computing Classification System (1998): C.0, K.3.2.
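VisibleZ itself is written in Java; the Python sketch below illustrates only the generic step-by-step fetch-decode-execute cycle that the emulator animates. The two-instruction machine, its opcodes, and its memory layout are invented for illustration and are unrelated to real z/Architecture encodings.

```python
# Hypothetical two-instruction machine; the encodings are invented for
# illustration and have nothing to do with IBM z/Architecture.
MEMORY = [0x01, 0x05,   # LOAD  #5  -> accumulator
          0x02, 0x03,   # ADD   #3
          0x00]         # HALT

def step(state):
    """One fetch-decode-execute cycle; returns False when halted."""
    pc = state["pc"]
    opcode = MEMORY[pc]                           # fetch
    if opcode == 0x00:                            # decode + execute: HALT
        return False
    operand = MEMORY[pc + 1]
    if opcode == 0x01:                            # LOAD immediate
        state["acc"] = operand
    elif opcode == 0x02:                          # ADD immediate
        state["acc"] += operand
    state["pc"] = pc + 2                          # advance program counter
    return True

state = {"pc": 0, "acc": 0}
while step(state):                                # step-by-step observation
    print(f"pc={state['pc']:02d} acc={state['acc']}")
```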

Relevance:

30.00%

Publisher:

Abstract:

Adaptability in distributed object-oriented enterprise frameworks for multimedia technology is critical for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in distributed computing systems. In this paper, we propose a Metalevel Component-Based Framework which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our approach of combining a meta-architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed multimedia applications. The proposed architecture of the pattern-oriented framework can dynamically incorporate new design patterns to address issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. © 2011 Springer Science+Business Media B.V.
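The abstract does not expose the framework's API, so the following Python sketch is only a loose illustration of the meta-level idea: pattern components are first-class objects that a registry can rebind while the system runs. All names (Framework, register_pattern, dispatch) are hypothetical.

```python
# Loose sketch of a meta-level registry: pattern components are
# registered under a role and can be swapped at run time without
# stopping the framework. Names are invented, not the paper's API.
class Framework:
    def __init__(self):
        self._patterns = {}                      # meta-level registry

    def register_pattern(self, role, component):
        self._patterns[role] = component         # dynamic (re)binding

    def dispatch(self, role, *args):
        return self._patterns[role](*args)

fw = Framework()
fw.register_pattern("cache", lambda key: f"miss:{key}")
print(fw.dispatch("cache", "video42"))           # miss:video42

# Adaptation: weave in a replacement component on the fly
fw.register_pattern("cache", lambda key: f"hit:{key}")
print(fw.dispatch("cache", "video42"))           # hit:video42
```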

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the use of Bluetooth and Java-based technologies in developing a multi-player mobile game in ubiquitous computing, which strongly depends on automatic contextual reconfiguration and context-triggered actions. Our investigation focuses on an extended form of ubiquitous computing which game software developers utilize to develop games for players. We have developed an experimental ubiquitous computing application that provides context-aware services to the game server and game players in a mobile distributed computing system. Contextual services clearly provide useful information in a context-aware system. However, designing a context-aware game is still a daunting task, and much theoretical and practical research remains to be done to reach the ubiquitous computing era. In this paper, we present the overall architecture and discuss, in detail, the implementation steps taken to create a Bluetooth- and Java-based context-aware game. We develop a multi-player game server and prepare the client and server code in ubiquitous computing, providing adaptive routines to handle connection information requests, logging, and context formatting and delivery for automatic contextual reconfiguration and context-triggered actions. © 2010 Binary Information Press.
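The paper's Java and Bluetooth code is not reproduced here; the Python sketch below shows only the general shape of context-triggered actions, where registered handlers fire when an observed context value (say, the number of nearby players reported over Bluetooth) changes. The class, keys, and handlers are invented.

```python
# Sketch of context-triggered actions: handlers fire only when the
# observed context actually changes (automatic contextual
# reconfiguration). Names and events are illustrative.
class ContextManager:
    def __init__(self):
        self._handlers = {}
        self._context = {}

    def on(self, key, handler):
        self._handlers.setdefault(key, []).append(handler)

    def update(self, key, value):
        if self._context.get(key) != value:       # context changed
            self._context[key] = value
            for handler in self._handlers.get(key, []):
                handler(value)                    # context-triggered action

cm = ContextManager()
cm.on("players_nearby", lambda n: print(f"rebalancing teams for {n} players"))
cm.update("players_nearby", 3)    # fires the handler
cm.update("players_nearby", 3)    # unchanged context: no action
```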

Relevance:

30.00%

Publisher:

Abstract:

Perhaps the most popular buzzword in today's IT world is cloud computing, or in Hungarian translation, "számítási felhő". The translation comes from the Hungarian version of the EU Digital Agenda (Digitális Menetrend magyar változata, 2010). A detailed description of the cloud computing business model is given by Bőgel (2009): György Bőgel describes the emergence of this new, utility-like IT service and its economic advantages, predicting a great future for cloud computing in the competition among business models. Alongside the business advantages of cloud computing, the author of this paper places greater emphasis on the factors hindering its rapid adoption, and on what the advantages and disadvantages mean for a business, IT, or compliance manager. Without diminishing the economic significance of the cloud model, he considers it important to address its problems and risks as well. He points out that the risks, particularly the security and data protection risks, differ substantially between the European Economic Area and the rest of the world, e.g., the United States. The article highlights these differences and explains why cloud computing can be expected to spread more slowly in Europe than elsewhere. It also presents the EU's efforts to promote the adoption of cloud computing in Europe, given the model's positive effect on competitiveness. / === / One of the most popular terms in recent web searches is cloud computing. Several authors present detailed descriptions of the new service model and its business benefits, and cite the optimistic prognoses of cloud experts regarding the competition among information system service models. The author analyses the operational benefits of cloud adoption and gives a detailed description of the inhibitors of the service model's fast expansion. He also analyses the pros and cons of the cloud for a business manager, an information officer, and a compliance officer. When considering the advantages of the cloud, it is equally important to review the problems and risks associated with the model. The paper gives a list of the expected cloud-specific risks. It also explains the differences in security and data protection approaches between the European Economic Area and the rest of the world, including the USA, and why slower expansion of the cloud model is expected in Europe than in the rest of the world. The EU's efforts to help spread the cloud model are also presented, as EU officials consider the model an important element of competitiveness.

Relevance:

30.00%

Publisher:

Abstract:

Computers have dramatically changed the way we live, conduct business, and deliver education. They have infiltrated the Bahamian public school system to the extent that many educators now feel the need for a national plan. The development of such a plan is a challenging undertaking, especially in developing countries where physical, financial, and human resources are scarce. This study assessed the situation with regard to computers within the Bahamian public school system, and provided recommended guidelines to the Bahamian government based on the results of a survey, the body of knowledge about trends in computer usage in schools, and the country's needs.

This was a descriptive study for which an extensive review of the literature in the areas of computer hardware, software, teacher training, research, curriculum, support services, and local context variables was undertaken. One objective of the study was to establish what should or could be, relative to the state of the art in educational computing. A survey was conducted involving 201 teachers and 51 school administrators from 60 randomly selected Bahamian public schools. A random stratified cluster sampling technique was used.

This study used both quantitative and qualitative research methodologies. Quantitative methods were used to summarize the data about numbers and types of computers, categories of software available, peripheral equipment, and related topics through the use of forced-choice questions in a survey instrument. Results were displayed in tables and charts. Qualitative methods, data synthesis and content analysis, were used to analyze the non-numeric data obtained from open-ended questions on teachers' and school administrators' questionnaires, such as those regarding teachers' perceptions and attitudes about computers and their use in classrooms. Interpretative methodologies were also used to analyze the qualitative results of several interviews conducted with senior public school system officials. Content analysis was used to gather data from the literature on topics pertaining to the study.

Based on the literature review and the data gathered for this study, a number of recommendations are presented. These recommendations may be used by the government of the Commonwealth of The Bahamas to establish policies with regard to the use of computers within the public school system.

Relevance:

30.00%

Publisher:

Abstract:

Mediation techniques provide interoperability and support integrated query processing among heterogeneous databases. While such techniques help data sharing among different sources, they increase the risk to data security, such as violating access control rules. Successful protection of information by an effective access control mechanism is a basic requirement for interoperation among heterogeneous data sources.

This dissertation first identified the challenges that a mediation system must meet in order to achieve both interoperability and security in an interconnected and collaborative computing environment: (1) context-awareness, (2) semantic heterogeneity, and (3) multiple security policy specification. Few existing approaches address all three security challenges in mediation systems. This dissertation provides a modeling and architectural solution to the problem of mediation security that addresses the aforementioned security challenges. A context-aware, flexible authorization framework was developed to deal with the security challenges faced by mediation systems. The authorization framework consists of two major tasks: specifying security policies and enforcing security policies. First, the security policy specification provides a generic and extensible method to model security policies with respect to the challenges posed by the mediation system. The security policies in this study are specified as 5-tuples followed by a series of authorization constraints, which are identified based on the relationships of the different security components in the mediation system. Two essential features of mediation systems, i.e., relationships among authorization components and interoperability among heterogeneous data sources, are the focus of this investigation. Second, this dissertation supports effective access control on mediation systems while providing uniform access to heterogeneous data sources. The dynamic security constraints are handled in the authorization phase instead of the authentication phase; thus the maintenance cost of the security specification can be reduced compared with related solutions.
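The abstract names 5-tuple policies with authorization constraints but does not list the tuple's fields, so the Python sketch below assumes a (subject, object, action, effect, constraints) shape purely for illustration; constraints are evaluated against a request context, mirroring the authorization-phase handling of dynamic constraints described above.

```python
from dataclasses import dataclass, field
from typing import Callable

# Assumed 5-tuple shape; the dissertation's actual fields are not
# given in the abstract.
@dataclass
class Policy:
    subject: str
    obj: str
    action: str
    effect: str                                   # "allow" or "deny"
    constraints: list[Callable[[dict], bool]] = field(default_factory=list)

def authorize(policies, subject, obj, action, context):
    """Grant access only if a matching policy's constraints all hold."""
    for p in policies:
        if (p.subject, p.obj, p.action) == (subject, obj, action):
            if all(c(context) for c in p.constraints):  # dynamic checks
                return p.effect == "allow"
    return False                                  # default deny

policies = [Policy("analyst", "patients_db", "read", "allow",
                   [lambda ctx: ctx.get("source") == "mediator"])]
print(authorize(policies, "analyst", "patients_db", "read",
                {"source": "mediator"}))          # True
```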

Relevance:

30.00%

Publisher:

Abstract:

Security remains a top priority for organizations as their information systems continue to be plagued by security breaches. This dissertation developed a unique approach to assessing the security risks associated with information systems, based on a dynamic neural network architecture. The risks considered encompass the production computing environment and the client machine environment. The risks are established as metrics that define how susceptible each of the computing environments is to security breaches.

The merit of the approach developed in this dissertation lies in the design and implementation of artificial neural networks to assess the risks in the computing and client machine environments. The datasets utilized in the implementation and validation of the model were obtained from business organizations using a web survey tool hosted by Microsoft. This site was designed to host anonymous surveys devised specifically as part of this dissertation; Microsoft customers can log in to the website and submit their responses to the questionnaire.

This work asserted that security in information systems is not dependent exclusively on technology but rather on the triumvirate of people, process, and technology. The questionnaire, and consequently the developed neural network architecture, accounted for all three key factors that impact information systems security.

As part of the study, a methodology for developing, training, and validating such a predictive model was devised and successfully deployed. This methodology prescribes how to determine the optimal topology, activation function, and associated parameters for this security-based scenario. The assessment of the effects of security breaches on information systems has traditionally been post-mortem, whereas this dissertation provides a predictive solution with which organizations can proactively determine how susceptible their environments are to security breaches.
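The dissertation's actual topology, activation function, and Microsoft-hosted survey data are not given in the abstract; the sketch below is a toy stand-in that trains a small feed-forward network on synthetic people/process/technology scores with an assumed risk weighting, using scikit-learn's MLPRegressor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic survey-style features and an assumed risk weighting; both
# are invented for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))               # [people, process, technology]
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]  # assumed risk metric

model = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                     max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])                        # train on 150 responses
print("validation MAE:",
      np.abs(model.predict(X[150:]) - y[150:]).mean())
```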

Relevance:

30.00%

Publisher:

Abstract:

This research presents several components encompassing the scope of the objective of Data Partitioning and Replication Management in Distributed GIS Databases. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution.

Part of the research is to study the patterns of geographic raster data processing and to propose algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases to achieve high data availability and Quality of Service (QoS), considering distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images upon demand and generates mosaicked tiles only for the required region according to the user's requirements, such as resolution, temporal range, and target bands, to reduce redundancy in storage and to utilize available computing and storage resources more efficiently.

Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation.

Vast numbers of computing, network, and storage resources on the Internet are idling or not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context.

The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
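As a rough illustration of the on-demand selection behind dynamic mosaicking, the Python sketch below filters a catalog of source images by spatial overlap, temporal range, and resolution so that tiles are generated only for the requested region; the types and field names are invented, not TerraFly's schema.

```python
from dataclasses import dataclass

# Illustrative catalog entry; fields are invented for this sketch.
@dataclass
class SourceImage:
    name: str
    bbox: tuple          # (min_lon, min_lat, max_lon, max_lat)
    year: int
    resolution_m: float

def select_for_mosaic(images, region, years, max_resolution_m):
    """Keep only images overlapping the requested region, time, and detail."""
    lo_lon, lo_lat, hi_lon, hi_lat = region
    return [img for img in images
            if img.bbox[0] < hi_lon and img.bbox[2] > lo_lon     # lon overlap
            and img.bbox[1] < hi_lat and img.bbox[3] > lo_lat    # lat overlap
            and years[0] <= img.year <= years[1]
            and img.resolution_m <= max_resolution_m]

catalog = [SourceImage("a", (-80.5, 25.5, -80.0, 26.0), 2004, 1.0),
           SourceImage("b", (-80.2, 25.7, -79.8, 26.2), 1999, 5.0)]
print([i.name for i in select_for_mosaic(
    catalog, (-80.3, 25.6, -80.1, 25.9), (2000, 2010), 2.0)])    # ['a']
```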

Relevance:

30.00%

Publisher:

Abstract:

With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage, and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged as a cumbersome, labor-intensive, and error-prone process, one that also struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of different data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor models for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of our proposed approaches.
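As a minimal illustration of log categorization and event summarization, the Python sketch below masks variable tokens so that raw lines collapse into event templates, which are then counted; the techniques studied in the dissertation (leading indicators, link structures, tensor models) go well beyond this.

```python
import re
from collections import Counter

def template(line):
    """Mask variable tokens so similar lines share one event template."""
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)  # mask hex addresses first
    return re.sub(r"\d+", "<NUM>", line)          # then mask plain numbers

logs = ["disk 3 timeout after 120s",
        "disk 7 timeout after 95s",
        "cache flush at 0x1f2a"]
summary = Counter(template(l) for l in logs)
for event, count in summary.most_common():
    print(count, event)
# 2 disk <NUM> timeout after <NUM>s
# 1 cache flush at <HEX>
```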

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these pertinent issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
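The dissertation's scheduling algorithm is not spelled out in the abstract; the Python sketch below shows one plausible shape of deadline-aware scheduling with predicted runtimes, inflating each prediction by a safety margin in the spirit of the ~15% average error reported above. The job format, margin, and rejection rule are assumptions.

```python
import heapq

MARGIN = 1.15   # assumed safety factor covering prediction error

def schedule(jobs, machines):
    """jobs: (deadline, predicted_runtime, name); machines: count.

    Earliest-deadline-first admission: a job is placed only if its
    margin-inflated finish time still meets its deadline.
    """
    free_at = [0.0] * machines                    # min-heap of machine free times
    heapq.heapify(free_at)
    plan = []
    for deadline, runtime, name in sorted(jobs):  # earliest deadline first
        start = heapq.heappop(free_at)
        finish = start + runtime * MARGIN
        if finish <= deadline:
            plan.append((name, start))
            heapq.heappush(free_at, finish)
        else:
            plan.append((name, None))             # would miss deadline: reject
            heapq.heappush(free_at, start)
    return plan

print(schedule([(10.0, 4.0, "mri-seg"), (6.0, 3.0, "ct-recon"),
                (5.0, 4.8, "pet-fuse")], machines=2))
```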