115 results for Computing Classification Systems
Abstract:
Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to solve the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific DM software tool is discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be applied equally well in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
Abstract:
Statistics has penetrated almost all branches of science and all areas of human endeavor. At the same time, statistics is not only misunderstood, misused and abused to a frightening extent, but it is also often much disliked by students in colleges and universities. This lecture traces the historical development of statistics, aiming to identify the most important turning points that led to the present state of statistics and to answer the questions “What went wrong with statistics?” and “What to do next?”. ACM Computing Classification System (1998): A.0, A.m, G.3, K.3.2.
Abstract:
Dependence in the world of uncertainty is a complex concept. However, it exists, is asymmetric, has magnitude and direction, and can be measured. We use some measures of dependence between random events to illustrate how to apply them in the study of dependence between non-numeric bivariate variables and numeric random variables. Graphics show the inner dependence structure in the Clayton Archimedean copula and the bivariate Poisson distribution. This approach is valid for studying the local dependence structure of any pair of random variables determined by its empirical or theoretical distribution. It can also be used to simulate dependent events and dependent r.v.'s, although some restrictions apply. ACM Computing Classification System (1998): G.3, J.2.
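For illustration, a minimal Python sketch (not from the paper) of simulating dependent r.v.'s from a Clayton copula via the standard Marshall-Olkin scheme, checking the empirical Kendall's tau against its theoretical value θ/(θ+2); the parameter θ = 2 and the sample size are arbitrary choices:

```python
# Sketch: simulating Clayton-dependent uniforms and measuring dependence.
# Hypothetical illustration, not the authors' code.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
theta = 2.0  # Clayton parameter; Kendall's tau = theta / (theta + 2)

# Marshall-Olkin sampling: V ~ Gamma(1/theta), E_i ~ Exp(1),
# then U_i = (1 + E_i / V) ** (-1/theta) are Clayton-dependent uniforms.
n = 10_000
V = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
E = rng.exponential(scale=1.0, size=(n, 2))
U = (1.0 + E / V[:, None]) ** (-1.0 / theta)

tau_hat, _ = kendalltau(U[:, 0], U[:, 1])
print(f"empirical tau = {tau_hat:.3f}, theoretical = {theta / (theta + 2):.3f}")
```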
Abstract:
Analysis of risk measures associated with price series data movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning aimed at setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practiced instrument for measuring risk and is evaluated by analyzing the negative/positive tail of the probability distribution of the returns (profit or loss). In modeling applications, least-squares estimation (LSE)-based linear regression models are often employed for modeling and analyzing correlated data. These linear models are optimal and perform relatively well under conditions such as the errors following normal or approximately normal distributions, being free of large outliers and satisfying the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow fat-tailed distributions while still possessing a finite variance. This is the situation in risk analysis, which involves analyzing tail distributions. Thus, the appropriateness of LSE-based regression models may be questioned and their applicability may be limited. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
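As an illustration of the three norms discussed, a hedged Python sketch (synthetic fat-tailed data, not the Iranian oil-price series or the authors' implementation) fitting L1-, L2- and L∞-norm regressions, the first and last via their standard linear-programming formulations:

```python
# Sketch: L1 (least absolute deviations), L2 (OLS) and Linf (Chebyshev)
# linear regression fits on synthetic fat-tailed data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])        # intercept + regressor
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)  # fat tails, finite variance
p = X.shape[1]

def fit_l1(X, y):
    """LAD via LP: minimize sum(t) subject to |y - X b| <= t."""
    n, p = X.shape
    c = np.r_[np.zeros(p), np.ones(n)]
    A = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b = np.r_[y, -y]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * p + [(0, None)] * n)
    return res.x[:p]

def fit_linf(X, y):
    """Chebyshev (minimax) regression via LP: minimize t subject to |y - X b| <= t."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]
    ones = np.ones((n, 1))
    A = np.block([[X, -ones], [-X, -ones]])
    b = np.r_[y, -y]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * p + [(0, None)])
    return res.x[:p]

beta_l2, *_ = np.linalg.lstsq(X, y, rcond=None)
print("L1  :", fit_l1(X, y))
print("L2  :", beta_l2)
print("Linf:", fit_linf(X, y))
```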
Abstract:
This article shows the social importance of the subsistence minimum in Georgia, and presents the methodology of its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum baskets are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. The average consumer expenditures on food and their share relative to other expenditures are examined over time. Our methodology of subsistence minimum calculation is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities need to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
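A hypothetical worked example of a food-share based calculation of the kind the abstract describes; every figure below is made up for illustration and is not from the article:

```python
# Hypothetical illustration of a food-share based subsistence-minimum
# calculation; all numbers are assumed, not taken from the article.
daily_food_basket_cost = 4.10   # cost of the minimum food basket, GEL/day (assumed)
food_weight = 0.70              # weight of food in the subsistence-minimum basket (assumed)

monthly_food_cost = daily_food_basket_cost * 30
subsistence_minimum = monthly_food_cost / food_weight  # food plus implied non-food part
print(f"monthly subsistence minimum ≈ {subsistence_minimum:.2f} GEL")
```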
Abstract:
Our modular approach to data hiding is an innovative concept in the data hiding research field. It enables the creation of modular digital watermarking methods that have extendable features and are designed for use in web applications. The methods consist of two types of modules: a basic module and an application-specific module. The basic module mainly provides features connected with the specific image format. As JPEG is a preferred image format on the Internet, we have focused on achieving robust and error-free embedding and retrieval of the embedded data in JPEG images. The application-specific modules are adaptable to the user requirements of the concrete web application. The experimental results of the modular data watermarking are very promising: they indicate excellent image quality, a satisfactory size of the embedded data and perfect robustness against JPEG transformations with prespecified compression ratios. ACM Computing Classification System (1998): C.2.0.
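The abstract specifies the two-module architecture but not the embedding algorithm itself; the following Python skeleton only sketches how the two module types might fit together, with placeholder bodies:

```python
# Skeleton of the two-module design described above. The actual embedding
# algorithm is not given in the abstract, so the bodies are placeholders.
class BasicJPEGModule:
    """Format-specific layer: robust, error-free embedding into JPEG data."""
    def embed(self, jpeg_bytes: bytes, payload: bytes) -> bytes:
        raise NotImplementedError("format-specific embedding goes here")

    def extract(self, jpeg_bytes: bytes) -> bytes:
        raise NotImplementedError("format-specific extraction goes here")

class ApplicationModule:
    """Application-specific layer: adapts the payload to one web application."""
    def __init__(self, basic: BasicJPEGModule):
        self.basic = basic

    def watermark(self, jpeg_bytes: bytes, user_data: dict) -> bytes:
        payload = repr(user_data).encode()  # app-specific payload encoding (assumed)
        return self.basic.embed(jpeg_bytes, payload)
```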
Abstract:
Methods for representing equivalence problems of various combinatorial objects as graphs or binary matrices are considered. Such representations can be used for isomorphism testing in classification or generation algorithms. It is often easier to consider a graph or binary matrix isomorphism problem than to implement heavyweight algorithms that depend on the particular combinatorial objects. Moreover, well-tested algorithms already exist for the graph isomorphism problem (nauty) and for the binary matrix isomorphism problem (Q-Extension). ACM Computing Classification System (1998): F.2.1, G.4.
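As a small illustration of the representation idea, here is the standard encoding of a binary matrix as a colored bipartite graph, under which equivalence up to row and column permutations reduces to graph isomorphism; networkx is used purely for demonstration in place of nauty or Q-Extension:

```python
# Sketch: binary matrices as colored bipartite graphs, so that equivalence
# under row/column permutations becomes a graph isomorphism question.
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def matrix_to_graph(m):
    g = nx.Graph()
    rows, cols = len(m), len(m[0])
    g.add_nodes_from((("r", i) for i in range(rows)), part="row")
    g.add_nodes_from((("c", j) for j in range(cols)), part="col")
    g.add_edges_from((("r", i), ("c", j))
                     for i in range(rows) for j in range(cols) if m[i][j])
    return g

a = [[1, 0, 1], [0, 1, 0]]
b = [[0, 1, 0], [1, 0, 1]]   # a with its rows swapped: equivalent
print(nx.is_isomorphic(matrix_to_graph(a), matrix_to_graph(b),
                       node_match=categorical_node_match("part", None)))
```

The "part" coloring prevents the isomorphism from mapping row vertices to column vertices, which would not correspond to a legal matrix operation.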
Abstract:
Augmented reality is among the latest information technologies in the modern electronics industry. Its essence is the addition of advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and of approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.
Abstract:
We develop a simplified implementation of the Hoshen-Kopelman cluster counting algorithm adapted to honeycomb networks. In our implementation we assume that all nodes in the network are occupied and that links between nodes can be either intact or broken. The algorithm counts how many clusters there are in the network and determines which nodes belong to each cluster. The network information is stored in two data sets: the first is related to the connectivity of the nodes and the second to the state of the links. The algorithm finds all clusters in a single scan across the network, after which cluster relabeling operates on a vector whose size is much smaller than the size of the network. By counting the number of clusters of each size, the algorithm determines the cluster size probability distribution, from which the mean cluster size parameter can be estimated. Although our implementation of the Hoshen-Kopelman algorithm works only for networks with a honeycomb (hexagonal) structure, it can easily be adapted to networks with arbitrary connectivity between the nodes (triangular, square, etc.). The proposed adaptation of the Hoshen-Kopelman cluster counting algorithm is applied to the study of the thermal degradation of a graphene-like honeycomb membrane by means of Molecular Dynamics simulation with a Langevin thermostat. ACM Computing Classification System (1998): F.2.2, I.5.3.
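A minimal Python sketch of the single-scan counting idea on a generic node/link network (the paper's honeycomb-specific indexing is not reproduced); the small union-find label array plays the role of the relabeling vector:

```python
# Sketch: all nodes occupied, links intact or broken; clusters are merged
# in one scan over the intact links using a union-find relabeling vector.
from collections import Counter

def count_clusters(n_nodes, intact_links):
    label = list(range(n_nodes))          # relabeling vector, one entry per node

    def find(i):                          # follow labels to the cluster root
        while label[i] != i:
            label[i] = label[label[i]]    # path halving keeps chains short
            i = label[i]
        return i

    for a, b in intact_links:             # single scan across the links
        label[find(a)] = find(b)

    roots = [find(i) for i in range(n_nodes)]
    sizes = Counter(roots)                # nodes per cluster
    dist = Counter(sizes.values())        # cluster size -> number of clusters
    mean_size = sum(s * s for s in sizes.values()) / n_nodes  # one common definition
    return len(sizes), dist, mean_size

clusters, size_dist, mean = count_clusters(6, [(0, 1), (1, 2), (4, 5)])
print(clusters, dict(size_dist), mean)    # 3 clusters: {0,1,2}, {3}, {4,5}
```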
Abstract:
Well-prepared, adaptive and sustainably developing specialists are an important competitive advantage, but also one of the main challenges for businesses. One option the education system has for creating and developing staff adequate to these needs is the development of projects with topics drawn from the real economy ("Practical Projects"). Objective assessment is an essential driver and motivator, and is based on a system of well-chosen, well-defined and specific criteria and indicators. One approach to a more objective evaluation of practical projects is to find more objective weights for the criteria. A natural and reasonable approach is to accumulate the opinions of proven experts and subsequently derive the weights from the accumulated data. The preparation and conduct of a survey among recognized experts in the field of project-based learning in mathematics, informatics and information technologies is described. Processing the accumulated data with the Analytic Hierarchy Process (AHP) allowed us to objectively determine the weights of the evaluation criteria and hence to achieve the desired objectiveness. ACM Computing Classification System (1998): K.3.2.
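A brief sketch of the AHP step: criteria weights taken as the principal eigenvector of a pairwise-comparison matrix, with Saaty's consistency ratio; the 3×3 matrix below is hypothetical, not the survey data:

```python
# Sketch of AHP weight derivation. The comparison matrix is a made-up
# 3-criteria example; the survey data from the paper is not reproduced.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # A[i][j]: how much criterion i outweighs j

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized criteria weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # Saaty's random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```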
Abstract:
Intrusion detection is a critical component of information security systems. The intrusion detection process attempts to detect malicious attacks by examining various data collected during processes on the protected system. This paper examines anomaly-based intrusion detection based on sequences of system calls. The aim is to construct a model that describes normal or acceptable system activity using the classification trees approach. The created database is then used as a basis for distinguishing intrusive activity from legitimate activity by means of string metric algorithms. The major results of the implemented simulation experiments are presented and discussed as well.
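To illustrate the string-metric step, a small Python sketch that flags a system-call sequence as anomalous when its Levenshtein distance to every sequence in a normal-behavior database exceeds a threshold; the database and threshold here are hypothetical, and the classification-tree model building is not shown:

```python
# Sketch: string-metric comparison of an observed system-call sequence
# against a (hypothetical) database of normal sequences.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

normal_db = [("open", "read", "mmap", "close"),
             ("open", "read", "write", "close")]

def is_anomalous(seq, threshold=2):
    return min(levenshtein(seq, ref) for ref in normal_db) > threshold

print(is_anomalous(("open", "read", "close")))                         # False
print(is_anomalous(("socket", "connect", "exec", "unlink", "chmod")))  # True
```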
Abstract:
The paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
AMS subject classification: 49N35, 49N55, 65Lxx.
Abstract:
Dedicated to the memory of the late Professor Stefan Dodunekov on the occasion of his 70th anniversary. We classify, up to multiplier equivalence, maximal (v, 3, 1) optical orthogonal codes (OOCs) with v ≤ 61 and maximal (v, 3, 2, 1) OOCs with v ≤ 99. There is a one-to-one correspondence between maximal (v, 3, 1) OOCs, maximal cyclic binary constant-weight codes of weight 3 and minimum distance 4, (v, 3; ⌊(v − 1)/6⌋) difference packings, and maximal (v, 3, 1) binary cyclically permutable constant-weight codes. Therefore the classification of (v, 3, 1) OOCs holds for them too. Some of the classified (v, 3, 1) OOCs are perfect; they are equivalent to cyclic Steiner triple systems of order v and (v, 3, 1) cyclic difference families.
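For concreteness, a small Python check of the λ = 1 property through difference sets: in a (v, 3, 1) OOC every nonzero difference modulo v may occur at most once across all codewords; the (13, 3, 1) instance below is a standard valid example, not one of the paper's classified codes:

```python
# Sketch: verifying the lambda = 1 auto/cross-correlation property of a
# candidate (v, 3, 1) OOC via its difference sets.
from itertools import permutations

def is_ooc_lambda1(v, codewords):
    seen = set()
    for block in codewords:               # each codeword = set of chip positions
        for a, b in permutations(block, 2):
            d = (a - b) % v
            if d in seen:                 # repeated difference => correlation > 1
                return False
            seen.add(d)
    return True

# A maximal (13, 3, 1) OOC with floor((13 - 1)/6) = 2 codewords:
print(is_ooc_lambda1(13, [(0, 1, 4), (0, 2, 7)]))   # True
```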
Abstract:
The paper suggests a classification of dynamic rule-based systems. For each class of systems, limit behavior is studied. Systems with stabilizing limit states or stabilizing limit trajectories are identified, and such states and trajectories are found. The structure of the set of limit states and trajectories is investigated.