857 results for Computing Classification Systems


Relevance: 80.00%

Abstract:

Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to address the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific piece of DM software is also discussed, namely an analytical tool for analyzing data and summarizing the relationships identified in it. The paper concludes that the DM techniques proposed for the SE domain could equally be applied in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
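
To make one of the surveyed techniques concrete, the sketch below (an assumption-laden illustration, not the analytical tool the paper discusses) clusters a handful of made-up bug-report summaries with TF-IDF features and k-means, a typical DM task over SE data.

```python
# A hedged sketch, not the tool discussed in the paper: cluster bug-report
# summaries (one typical kind of SE data) by their TF-IDF representation so
# that related defects end up in the same group. Texts and cluster count are
# purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

bug_reports = [
    "NullPointerException when saving an empty project",
    "Crash on save when no project is open",
    "UI freezes while loading large files",
    "Application hangs when opening a big file",
]

X = TfidfVectorizer(stop_words="english").fit_transform(bug_reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for report, label in zip(bug_reports, labels):
    print(label, report)
```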

Relevance: 80.00%

Abstract:

Statistics has penetrated almost all branches of science and all areas of human endeavor. At the same time, statistics is not only misunderstood, misused and abused to a frightening extent, but it is also often much disliked by students in colleges and universities. This lecture traces the historical development of statistics, aiming to identify the most important turning points that led to the present state of the discipline and to answer the questions “What went wrong with statistics?” and “What to do next?”. ACM Computing Classification System (1998): A.0, A.m, G.3, K.3.2.

Relevance: 80.00%

Abstract:

Dependence in the world of uncertainty is a complex concept. However, it exists, is asymmetric, has magnitude and direction, and can be measured. We use some measures of dependence between random events to illustrate how to apply them when studying the dependence between non-numeric bivariate variables and numeric random variables. Graphics show the inner dependence structure of the Clayton Archimedean copula and the bivariate Poisson distribution. This approach is valid for studying the local dependence structure of any pair of random variables determined by their empirical or theoretical distribution. It can also be used to simulate dependent events and dependent random variables, although some restrictions apply. ACM Computing Classification System (1998): G.3, J.2.
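
As a minimal illustration of the kind of dependence structure mentioned above (a sketch with an assumed parameter value, not the paper's code), the snippet below simulates pairs with Clayton-copula dependence via the standard gamma-frailty construction and compares the empirical Kendall's tau with its theoretical value θ/(θ + 2).

```python
# Minimal sketch: simulate Clayton-dependent uniform pairs via the
# gamma-frailty (Marshall-Olkin) construction and compare the empirical
# Kendall's tau with the theoretical value theta / (theta + 2).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
theta = 2.0                                            # assumed Clayton parameter (theta > 0)
n = 10_000

v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)    # gamma frailty
e = rng.exponential(scale=1.0, size=(2, n))            # independent Exp(1) variables
u = (1.0 + e / v) ** (-1.0 / theta)                    # Clayton-dependent uniforms

tau_emp, _ = kendalltau(u[0], u[1])
print(f"empirical tau = {tau_emp:.3f}, theoretical tau = {theta / (theta + 2):.3f}")
```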

Relevance: 80.00%

Abstract:

Analysis of the risk measures associated with price series movements, and their prediction, is of strategic importance in the financial markets as well as to policy makers, in particular for short- and long-term planning aimed at setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument for measuring risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, freedom from large outliers, and satisfaction of the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow fat-tailed distributions and the error terms possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, the appropriateness of LSE-based regression models may be questioned and their applicability may be limited. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
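
As a hedged sketch of the comparison described above (synthetic fat-tailed returns and a simple trend model, not the paper's Iranian crude oil data or its exact models), the snippet below fits a linear regression under the L1, L2 and L∞ norms and reads off a historical 95% VaR as a quantile of the returns.

```python
# Sketch only: fit a linear trend to synthetic fat-tailed returns under three
# different Lp norms, then compute a historical 95% Value-at-Risk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(250, dtype=float)                       # hypothetical daily index
returns = 0.0002 + 1e-5 * t + 0.01 * rng.standard_t(df=3, size=t.size)  # fat tails

X = np.column_stack([np.ones_like(t), t])             # intercept + linear trend

def fit(p):
    """Minimise ||returns - X b||_p over b, for p = 1, 2 or np.inf."""
    objective = lambda b: np.linalg.norm(returns - X @ b, ord=p)
    return minimize(objective, x0=np.zeros(2), method="Nelder-Mead").x

for p in (1, 2, np.inf):
    print(f"L{p}-norm coefficients:", np.round(fit(p), 6))

var_95 = -np.quantile(returns, 0.05)                  # historical 95% VaR (loss)
print(f"historical 95% VaR: {var_95:.4f}")
```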

Relevance: 80.00%

Abstract:

This article shows the social importance of the subsistence minimum in Georgia and presents the methodology of its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily consumption value of the minimum food basket is also calculated, and the average consumer expenditures on food and the shares of the other expenditure categories are examined over time. Our methodology for calculating the subsistence minimum is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities need to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
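
Purely to illustrate how the food weight enters such a calculation (hypothetical numbers and an assumed fixed food share, not the paper's Georgian figures or exact methodology), the sketch below scales a minimum food basket cost up to a full subsistence minimum.

```python
# Illustrative sketch only: scale the cost of the minimum food basket up to a
# full subsistence minimum using an assumed weight of food expenditures.
daily_food_basket = 5.60      # hypothetical cost of the minimum food basket per day
food_weight = 0.70            # assumed share of food in total subsistence spending

daily_minimum = daily_food_basket / food_weight
monthly_minimum = daily_minimum * 30

print(f"daily subsistence minimum:   {daily_minimum:.2f}")
print(f"monthly subsistence minimum: {monthly_minimum:.2f}")
```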

Relevance: 80.00%

Abstract:

Our modular approach to data hiding is an innovative concept in the data hiding research field. It enables the creation of modular digital watermarking methods that have extendable features and are designed for use in web applications. The methods consist of two types of modules: a basic module and an application-specific module. The basic module mainly provides features tied to the specific image format. As JPEG is a preferred image format on the Internet, we have focused on achieving robust and error-free embedding and retrieval of the embedded data in JPEG images. The application-specific modules are adaptable to the user requirements of the concrete web application. The experimental results of the modular data watermarking are very promising: they indicate excellent image quality, satisfactory size of the embedded data and perfect robustness against JPEG transformations with pre-specified compression ratios. ACM Computing Classification System (1998): C.2.0.

Relevance: 80.00%

Abstract:

Methods for representing equivalence problems of various combinatorial objects as graphs or binary matrices are considered. Such representations can be used for isomorphism testing in classification or generation algorithms. It is often easier to consider a graph or binary matrix isomorphism problem than to implement heavyweight algorithms tailored to the particular combinatorial objects. Moreover, well-tested algorithms already exist for the graph isomorphism problem (nauty) and for the binary matrix isomorphism problem (Q-Extension). ACM Computing Classification System (1998): F.2.1, G.4.
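
The following minimal sketch shows the kind of reduction described above, using networkx purely for illustration instead of nauty or Q-Extension: two binary matrices are equivalent up to row and column permutations exactly when their vertex-coloured bipartite graphs (rows and columns as vertices, edges on the 1-entries) are isomorphic.

```python
# Sketch: test equivalence of binary matrices up to row/column permutations by
# reducing to isomorphism of vertex-coloured bipartite graphs (networkx is used
# here instead of nauty/Q-Extension purely for illustration).
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def matrix_to_graph(m):
    g = nx.Graph()
    rows, cols = len(m), len(m[0])
    g.add_nodes_from((("r", i) for i in range(rows)), colour="row")
    g.add_nodes_from((("c", j) for j in range(cols)), colour="col")
    g.add_edges_from(
        (("r", i), ("c", j))
        for i in range(rows) for j in range(cols) if m[i][j]
    )
    return g

a = [[1, 0, 1],
     [0, 1, 0]]
b = [[0, 1, 0],   # the same matrix with its rows swapped
     [1, 0, 1]]

equivalent = nx.is_isomorphic(
    matrix_to_graph(a), matrix_to_graph(b),
    node_match=categorical_node_match("colour", None),
)
print("equivalent up to row/column permutations:", equivalent)
```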

Relevance: 80.00%

Abstract:

Augmented reality is among the latest information technologies in the modern electronics industry. Its essence is the addition of advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and of the approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.

Relevance: 80.00%

Abstract:

We develop a simplified implementation of the Hoshen-Kopelman cluster counting algorithm adapted for honeycomb networks. In our implementation we assume that all nodes in the network are occupied and that the links between nodes can be either intact or broken. The algorithm counts how many clusters there are in the network and determines which nodes belong to each cluster. The network information is stored in two data sets: the first describes the connectivity of the nodes and the second the state of the links. The algorithm finds all clusters in a single scan across the network, after which cluster relabelling operates on a vector whose size is much smaller than the size of the network. By counting the number of clusters of each size, the algorithm determines the cluster-size probability distribution, from which the mean cluster size can be estimated. Although our implementation of the Hoshen-Kopelman algorithm works only for networks with a honeycomb (hexagonal) structure, it can easily be adapted to networks with arbitrary connectivity between the nodes (triangular, square, etc.). The proposed adaptation of the Hoshen-Kopelman cluster counting algorithm is applied to studying the thermal degradation of a graphene-like honeycomb membrane by means of Molecular Dynamics simulation with a Langevin thermostat. ACM Computing Classification System (1998): F.2.2, I.5.3.
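
A minimal sketch of the underlying idea, using a plain union-find over the intact links rather than the authors' single-scan Hoshen-Kopelman implementation: label the clusters of connected nodes and build the cluster-size distribution from which a mean cluster size can be estimated.

```python
# Sketch: cluster counting with union-find given the list of intact links
# (all nodes are assumed occupied, as in the paper's setting).
from collections import Counter

def find(parent, x):
    while parent[x] != x:                 # path halving
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def cluster_size_distribution(n_nodes, intact_links):
    parent = list(range(n_nodes))
    for a, b in intact_links:             # merge the clusters joined by each link
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
    roots = [find(parent, x) for x in range(n_nodes)]
    return Counter(Counter(roots).values())      # {cluster size: number of clusters}

# Toy network: 6 nodes, only the listed links are intact.
sizes = cluster_size_distribution(6, [(0, 1), (1, 2), (4, 5)])
mean_size = sum(s * s * c for s, c in sizes.items()) / sum(s * c for s, c in sizes.items())
print("size distribution:", dict(sizes), "mean cluster size:", round(mean_size, 2))
```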

Relevance: 80.00%

Abstract:

Well-prepared, adaptive and sustainably developing specialists are an important competitive advantage, but also one of the main challenges for businesses. One way the education system can create and develop staff adequate to these needs is through projects on topics drawn from the real economy ("practical projects"). Objective assessment is an essential driver and motivator, and it rests on a system of well-chosen, well-defined and specific criteria and indicators. One approach to a more objective evaluation of practical projects is to find more objective weights for the criteria. A natural and reasonable way to do this is to accumulate the opinions of proven experts and then derive the weights from the accumulated data. The preparation and conduct of a survey among recognized experts in the field of project-based learning in mathematics, informatics and information technologies is described. Processing the accumulated data with the Analytic Hierarchy Process (AHP) allowed us to determine the weights of the evaluation criteria objectively and hence to achieve the desired objectivity. ACM Computing Classification System (1998): K.3.2.
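
A hedged sketch of the AHP weighting step (the pairwise judgements below are made up for illustration, not the survey data): the criterion weights are the normalised principal eigenvector of a pairwise-comparison matrix, and the consistency ratio checks whether the judgements are acceptably coherent.

```python
# Sketch: derive criterion weights from a (hypothetical) pairwise-comparison
# matrix with the Analytic Hierarchy Process and check its consistency ratio.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # hypothetical judgements for three criteria
              [1/3, 1.0, 2.0],     # on Saaty's 1-9 scale
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                          # criterion weights summing to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)              # consistency index
cr = ci / 0.58                                    # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```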

Relevance: 80.00%

Abstract:

Dry eye syndrome is a multifactorial disease of the tear film resulting from instability of the lacrimal functional unit, which produces changes in tear volume or distribution. In intensive care patients the problem is aggravated by various risk factors, such as mechanical ventilation, sedation, lagophthalmos and low temperatures, among others. The purpose of this study is to build a tool for assessing the severity of dry eye in patients hospitalized in intensive care units, based on the systematization of nursing care and its classification systems. This is a methodological study conducted in three stages: context analysis, concept analysis, and construction of the operational definitions and magnitudes of the nursing outcome. For the first stage we used the methodological framework of Hinds, Chaves and Cypress (1992). For the second stage we used the model of Walker and Avant together with an integrative review according to Whittemore and Knafl (2005). This stage enabled the identification of the concept's attributes, antecedents and consequences, and the construction of the definitions for the nursing outcome severity of dry eye. For the construction of the operational definitions and magnitudes, the psychometric model proposed by Pasquali (1999) was used. The context analysis showed that the matter deserves discussion and that nursing needs to pay attention to the problem of ocular injury, so that strategies can be created to minimize this highly prevalent event. The integrative review located 19,853 titles from the search crossings; 215 were selected, and from their abstracts 96 articles were read in full. After full reading, 10 were excluded, resulting in a sample of 86 articles used for the concept analysis and the construction of the definitions. The selected articles were found in greatest number in the Scopus database (55.82%), were conducted mainly in the United States (39.53%), and were published mainly in the last five years (48.82%). In the concept analysis, the antecedents identified were: age, lagophthalmos, environmental factors, medication use, systemic diseases, mechanical ventilation and ophthalmic surgery. The attributes were: TBUT < 10 s, Schirmer I test < 5 mm, Schirmer II test < 10 mm, and reduced osmolarity. The consequences were: ocular surface damage, ocular discomfort and visual instability. The definitions were constructed and indicators such as decreased blink mechanism and eye strain were added.

Relevance: 80.00%

Abstract:

This paper studies the coding and organization of the field of Communication Sciences, chiefly the UNESCO codes. The aim is to propose a change to these codes, since the way a scientific domain is classified and organized has operational and epistemological consequences for scientific work itself. The current classification is examined and shows a scarce and scattered presence of the terms linked to Communication. The practical and theoretical difficulties involved in its reorganization are described, as well as the possible sources (curricula, conferences, scientific journals, documentary proposals and keywords) and the working methods that can be employed, taking the domains of Knowledge Organization and of Communication as theoretical bases. Finally, two different disciplinary areas (History of Communication and Communication Technologies) are analysed through information on undergraduate and official master's courses from 12 Spanish universities, collected in a database. It is also observed that this kind of proposal requires knowledge derived from documentary instruments such as classifications and thesauri.

Relevance: 80.00%

Abstract:

Taxonomies have gained broad usage in a variety of fields due to their extensibility, as well as their use for classification and knowledge organization. Of particular interest is the digital document management domain, in which their hierarchical structure can be effectively employed to organize documents into content-specific categories. Common or standard taxonomies (e.g., the ACM Computing Classification System) contain concepts that are too general for conceptualizing specific knowledge domains. In this paper we introduce a novel automated approach that combines sub-trees from general taxonomies with specialized seed taxonomies by using specific Natural Language Processing techniques. We provide an extensible and generalizable model for combining taxonomies in the practical context of two very large European research projects. Because the manual combination of taxonomies by domain experts is a highly time-consuming task, our model measures the semantic relatedness between concept labels in CBOW or skip-gram Word2vec vector spaces. A preliminary quantitative evaluation of the resulting taxonomies is performed after applying a greedy algorithm with incremental thresholds used for matching and combining topic labels.
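
A hedged sketch of that matching step (the model file, labels and threshold are assumptions made for illustration, not the projects' actual pipeline): concept labels are embedded by averaging pretrained word2vec vectors, and each seed-taxonomy label is greedily attached to its best general-taxonomy match while the cosine similarity stays above a threshold.

```python
# Sketch: greedy matching of taxonomy concept labels by word2vec similarity.
# "word2vec.bin" stands for any locally available pretrained CBOW or skip-gram
# model; labels and the threshold are illustrative.
import numpy as np
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

def embed(label):
    """Average the word vectors of a concept label's in-vocabulary words."""
    vecs = [model[w] for w in label.lower().split() if w in model]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    n = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / n) if n else 0.0

def greedy_combine(general_labels, seed_labels, threshold=0.6):
    """Attach each seed label to its most similar general label above the threshold."""
    pairs = []
    for seed in seed_labels:
        best = max(general_labels, key=lambda g: cosine(embed(seed), embed(g)))
        score = cosine(embed(seed), embed(best))
        if score >= threshold:
            pairs.append((seed, best, round(score, 3)))
    return pairs

print(greedy_combine(["machine learning", "computer graphics"],
                     ["reinforcement learning", "serious games rendering"]))
```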

Relevance: 80.00%

Abstract:

The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. It automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), but also into the categories of a high-level serious games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the entire framework can be extended to support additional languages.

Relevance: 80.00%

Abstract:

This study investigates the effect of solid dispersions prepared from polyethylene glycol (PEG) 3350 and 6000 Da, alone or combined with the non-ionic surfactant Tween 80, on the solubility and dissolution rate of the poorly soluble drug eprosartan mesylate (ESM), in an attempt to improve its bioavailability following oral administration.

INTRODUCTION

ESM is a potent anti-hypertensive agent [1]. It has low water solubility and is classified as a Class II drug under the Biopharmaceutics Classification System (BCS), leading to low and variable oral bioavailability (approximately 13%) [2]. Thus, improving ESM solubility and/or dissolution rate would eventually improve the drug's bioavailability. Solid dispersion is a widely used technique for improving the water solubility of poorly water-soluble drugs by employing various biocompatible polymers. In this study, we aimed to enhance the solubility and dissolution of ESM using solid dispersions (SDs) formulated from two grades of polyethylene glycol (PEG) polymers (PEG 3350 and PEG 6000 Da), either individually or in combination with Tween 80.

MATERIALS AND METHODS

ESM SDs were prepared by the solvent evaporation method using either PEG 3350 or PEG 6000 at various drug:polymer (w/w) ratios (1:1, 1:2, 1:3, 1:4, 1:5), alone or combined with Tween 80 added at a fixed proportion of 0.1 of the drug by weight. Physical mixtures (PMs) of drug and carriers were also prepared at the same ratios. The solid dispersions and physical mixtures were characterized in terms of drug content and drug dissolution using USP dissolution apparatus II, and assayed by an HPLC method. The drug dissolution enhancement ratio (ER%) of each SD relative to the plain drug was calculated. Drug-polymer interactions were evaluated using Differential Scanning Calorimetry (DSC) and FT-IR.

RESULTS AND DISCUSSION

The in vitro solubility and dissolution studies showed that SDs prepared using both polymers produced a remarkable improvement (p<0.05) in comparison to the plain drug, which reached around 32% (Fig. 1). The dissolution enhancement ratio was dependent on polymer type and concentration. Adding Tween 80 to the SD did not produce further dissolution enhancement, but it reduced the amount of polymer required to achieve the same enhancement. The DSC and FT-IR studies indicated that the SD transformed the drug from its crystalline to its amorphous form.

CONCLUSIONS

This study indicated that SDs prepared using both polymers (PEG 3350 and PEG 6000) remarkably improved the in vitro solubility and dissolution of ESM, which may improve the drug's bioavailability in vivo.

Acknowledgments

This work is a part of MSc thesis of O.M. Ali at the Faculty of Pharmacy, Aleppo University, Syria.

REFERENCES

[1] Ruilope L, Jager B. Eprosartan for the treatment of hypertension. Expert Opin Pharmacother 2003; 4(1): 107-14.

[2] Tenero D, Martin D, Wilson B, Jushchyshyn J, Boike S, Lundberg D, et al. Pharmacokinetics of intravenously and orally administered eprosartan in healthy males: absolute bioavailability and effect of food. Biopharm Drug Dispos 1998; 19(6): 351-6.