921 results for Data Storage Solutions
Abstract:
One major methodological problem in analysis of sequence data is the determination of costs from which distances between sequences are derived. Although this problem is currently not optimally dealt with in the social sciences, it has some similarity with problems that have been solved in bioinformatics for three decades. In this article, the authors propose an optimization of substitution and deletion/insertion costs based on computational methods. The authors provide an empirical way of determining costs for cases, frequent in the social sciences, in which theory does not clearly promote one cost scheme over another. Using three distinct data sets, the authors tested the distances and cluster solutions produced by the new cost scheme in comparison with solutions based on cost schemes associated with other research strategies. The proposed method performs well compared with other cost-setting strategies, while it alleviates the justification problem of cost schemes.
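To make the cost-optimization problem concrete, here is a minimal sketch of the distance computation the costs feed into: an optimal-matching (edit) distance computed by dynamic programming from a substitution-cost matrix and an indel cost. The states, sequences and cost values below are invented for illustration; the article's contribution is how to choose these costs, not this standard recursion.

    # Illustrative optimal-matching (edit) distance between two state sequences.
    # The substitution costs `sub` and indel cost `indel` are the quantities the
    # article proposes to optimize; the values below are arbitrary placeholders.
    def om_distance(a, b, sub, indel):
        n, m = len(a), len(b)
        # d[i][j] = minimal cost of transforming a[:i] into b[:j]
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = i * indel
        for j in range(1, m + 1):
            d[0][j] = j * indel
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d[i][j] = min(
                    d[i - 1][j] + indel,                        # deletion
                    d[i][j - 1] + indel,                        # insertion
                    d[i - 1][j - 1] + sub[a[i - 1]][b[j - 1]],  # substitution
                )
        return d[n][m]

    # Hypothetical states E(mployed)/U(nemployed)/S(chool) and cost scheme.
    sub = {"E": {"E": 0, "U": 2, "S": 1},
           "U": {"E": 2, "U": 0, "S": 1},
           "S": {"E": 1, "U": 1, "S": 0}}
    print(om_distance("EEUUS", "EUUSS", sub, indel=1.0))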
Abstract:
Constant scientific production at universities and research centers leads these organizations to produce and acquire large amounts of data in short periods of time. Because of this volume of data, research organizations become vulnerable to information overload, which can cause chaos in information management. In this context, data catalogues emerge as one possible solution to the problems of (i) organizing and (ii) managing data. In the scientific domain, data catalogues are implemented using the standard for digital geospatial metadata and are widely used to catalogue scientific information. The aim of this work is to present the characteristics of metadata access and storage in database systems in order to improve the description and dissemination of scientific data. Relevant aspects that should be analyzed during the planning stage are considered, since they can determine the success of the implementation. The use of data catalogues by research organizations can promote and facilitate the dissemination of scientific data, avoid duplication of effort, and encourage the use of data that have been collected, processed and stored.
Abstract:
This article describes a method for determining the polydispersity index Ip2 = Mz/Mw of the molecular weight distribution (MWD) of linear polymeric materials from linear viscoelastic data. The method uses the Mellin transform of the relaxation modulus of a simple molecular rheological model. One of the main features of this technique is that it yields useful MWD information directly from dynamic shear experiments: the relaxation spectrum does not have to be computed, so the associated ill-posed problem is avoided. Furthermore, no particular shape of the continuous MWD has to be assumed in order to obtain the polydispersity index. The technique is designed for entangled linear polymers, whatever the form of the MWD. The rheological information required is the storage modulus G′(ω) and loss modulus G″(ω), extending from the terminal zone to the plateau region. The proposed theoretical approach agrees well with the experimental polydispersity indices of several linear polymers over a wide range of average molecular weights and polydispersity indices, and is also applicable to binary blends.
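For orientation, the quantities involved have their standard definitions, written below in LaTeX (with n_i the number of chains of molar mass M_i); how the article's molecular model links the Mellin transform of the relaxation modulus G(t) to these moments is specific to the article and is not reproduced here.

    M_w = \frac{\sum_i n_i M_i^2}{\sum_i n_i M_i}, \qquad
    M_z = \frac{\sum_i n_i M_i^3}{\sum_i n_i M_i^2}, \qquad
    I_{p2} = \frac{M_z}{M_w},
    \qquad
    \mathcal{M}\{G\}(s) = \int_0^\infty G(t)\, t^{\,s-1}\, \mathrm{d}t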
Abstract:
Soil slope instability along highway infrastructure is an ongoing problem in Iowa, as slope failures endanger public safety and continue to result in costly repair work. While extensive research on slope stability investigation and analysis has been conducted in the past, the current study consists of field investigations addressing both the characterization and the reinforcement of such slope failures. Volume I summarizes the research methods and findings of this study, while Volume II provides procedural details for incorporating an infrequently used testing technique, the borehole shear test, into practice. Fifteen slopes along Iowa highways were investigated, including thirteen slides (failed slopes), one unfailed slope, and one proposed embankment slope (the Sugar Creek Project). The slopes are composed mainly of either clay shale or glacial till, and are generally gentle and of small scale, with slope angles ranging from 11 to 23 deg and heights ranging from 6 to 23 m. Extensive field investigations and laboratory tests were performed for each slope. Field investigations included surveys of slope geometry, borehole drilling, soil sampling, in-situ borehole shear testing (BST) and groundwater table measurement. Laboratory investigations mainly comprised ring shear tests, basic soil property tests (grain size analysis and Atterberg limits), mineralogy analyses, soil classification, and natural water content and density measurements on representative soil samples from each slope. Extensive direct shear tests and a few triaxial compression and unconfined compression tests were also performed on undisturbed soil samples for the Sugar Creek Project. Based on the results of the field and laboratory investigations, slope stability analyses using limit equilibrium methods were performed on each slope to determine the possible factors behind the slope failures or to evaluate potential instabilities. Deterministic analyses were performed for all slopes; a probabilistic analysis and a sensitivity study were also performed for the Sugar Creek Project slope. Results indicate that while the in-situ test rapidly provides effective shear strength parameters of soils, some training may be required for effective and appropriate use of the BST. The test is also primarily intended for cohesive soils and can produce erroneous results in gravelly soils. Additionally, borehole quality affects test results, and disturbance to borehole walls should be minimized before testing. A final limitation of widespread borehole shear testing may be its limited availability, as only about four to six test devices are currently in use in Iowa. Based on the data gathered in the field testing, the reinforcement investigations are continued in Volume III.
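As a minimal illustration of the limit-equilibrium idea behind the deterministic analyses (not the study's actual method or data), the classical infinite-slope factor of safety can be evaluated from effective-stress strength parameters such as those the BST provides; all input values below are hypothetical.

    import math

    # Infinite-slope factor of safety under effective stress (illustrative only):
    # FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / [gamma*z*sin(beta)*cos(beta)]
    def infinite_slope_fs(c_eff, phi_eff_deg, gamma, z, beta_deg, u=0.0):
        beta = math.radians(beta_deg)
        phi = math.radians(phi_eff_deg)
        resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
        driving = gamma * z * math.sin(beta) * math.cos(beta)
        return resisting / driving

    # Hypothetical glacial-till slope: c' = 5 kPa, phi' = 25 deg, unit weight
    # 19 kN/m3, failure surface 4 m deep, slope angle 15 deg, zero pore pressure.
    print(infinite_slope_fs(c_eff=5.0, phi_eff_deg=25.0, gamma=19.0, z=4.0, beta_deg=15.0))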
Abstract:
OBJECTIVE: To determine whether resin-dentin microtensile bond strength (µTBS) results correlate with the outcome parameters of clinical studies on non-retentive Class V restorations. METHODS: Resin-dentin µTBS data were obtained from one test center; the in vitro tests were all performed by the same operator. The µTBS testing was performed 8 h after bonding and after 6 months of storing the specimens in water. Pre-test failures (PTFs) of specimens were included in the analysis, attributing to them a value of 1 MPa. Prospective clinical studies on cervical (Class V) restorations with an observation period of at least 18 months were searched in the literature. The clinical outcome variables were retention loss, marginal discoloration and marginal integrity. Furthermore, an index was formulated to better compare the laboratory and clinical results. Estimates of adhesive effects in a linear mixed model were used to summarize the clinical performance of each adhesive between 12 and 36 months. Spearman correlations between these clinical performances and the µTBS values were calculated subsequently. RESULTS: Thirty-six clinical studies covering 15 adhesive/restorative systems for which µTBS data were also available were included in the statistical analysis. In general, 3-step and 2-step etch-and-rinse systems showed higher bond strength values than the 2-step/3-step self-etching systems, which in turn produced higher values than the 1-step self-etching and resin-modified glass ionomer systems. Prolonged water storage of specimens resulted in a significant decrease of the mean bond strength values for 5 adhesive systems (Wilcoxon, p<0.05). There was a significant correlation between µTBS values, both after 8 h and after 6 months of storage, and marginal discoloration (r=0.54 and r=0.67, respectively). However, no such correlation was found between µTBS values and the retention rate, the clinical index or marginal integrity. SIGNIFICANCE: As µTBS data of adhesive systems, especially after water storage for 6 months, showed a good correlation with marginal discoloration in short-term clinical Class V restorations, longitudinal clinical trials should explore whether early marginal staining is predictive of future retention loss in non-carious cervical restorations.
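In outline, the correlation step can be reproduced as below; the per-adhesive values are invented placeholders (the study used adhesive-effect estimates from a linear mixed model), and scipy's spearmanr plays the role of the reported Spearman correlation. The sign of rho depends on how the clinical variable is coded.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical mean µTBS per adhesive after 6 months of water storage (MPa),
    # with pre-test failures already included at 1 MPa as in the study, paired
    # with a hypothetical marginal-discoloration rate per adhesive.
    utbs_6mo = np.array([38.0, 31.5, 27.2, 22.8, 18.4, 12.1])
    discoloration = np.array([0.05, 0.08, 0.12, 0.10, 0.21, 0.30])

    rho, p = spearmanr(utbs_6mo, discoloration)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")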
Abstract:
The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/) which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via the phenotype or genotype. It also allows the user to access the data in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome is annotated by an annotation pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data is annotated using combinations of terms from biological ontologies.
Abstract:
AIMS: To explore, among both patients with diabetes and healthcare professionals, opinions on current diabetes care and on the development of the "Regional Diabetes Program". METHODS: We employed qualitative methods (focus groups, FGs) and used a purposive sampling strategy to recruit patients with diabetes and healthcare professionals. We conducted one patient FG and one professional FG in each of the four health regions of the canton of Vaud, Switzerland. The eight FGs were audio-taped and transcribed verbatim; thematic analysis was then undertaken. RESULTS: Results showed variability in the perception of the quality of diabetes care and pointed to insufficient information regarding diabetes and a lack of collaboration. Participants also evoked patients' difficulties with self-management, as well as professionals' and patients' financial concerns. Proposed solutions included reinforcing existing structures, developing self-management education, and focusing on comprehensive and coordinated care, communication and teamwork. Patients and professionals were in favour of a "Regional Diabetes Program" tailored to the actors' needs, and viewed it as a means to reinforce existing care delivery. CONCLUSIONS: Patients and professionals pointed out similar problems and solutions but explored them differently. Combined with forthcoming quantitative data, these results should help to further develop, adapt and implement the "Regional Diabetes Program".
Abstract:
The main goal of CleanEx is to provide access to public gene expression data via unique gene names. A second objective is to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-data set comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of human genes and genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resource, such as cDNA clones or Affymetrix probe sets. The web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria. CleanEx is accessible at: http://www.cleanex.isb-sib.ch/.
Abstract:
1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would prove particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions to accommodate the lack of absences include ecological niche factor analysis (ENFA) and the use of generalized linear models (GLM) with simulated pseudo-absences. 2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) a GLM with pseudo-absences generated totally at random, and (ii) an ENFA only. 3. The influence of two different spatial resolutions (i.e. grain) was also assessed, to tackle the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map. 4. Four evaluation measures were used to compare these HS maps: total deviance explained, best kappa, Gini coefficient and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study. 5. Results showed that (i) GLM models using ENFA-weighted pseudo-absences provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures. 6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absences is a possible way to enhance the quality of GLM-based potential distribution maps and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data). Increased accuracy of potential distribution maps could help to better define suitable areas for species protection and reintroduction.
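A schematic of the ENFA-weighted pseudo-absence step, under assumed data structures (synthetic predictors and a random stand-in for the preliminary HS map; the paper's actual implementation is not reproduced): pseudo-absences are drawn preferentially where the ENFA habitat-suitability map is low, then a binomial GLM (logistic regression) is fitted to presences versus pseudo-absences.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical landscape: one row per grid cell, with environmental
    # predictors X, a preliminary ENFA habitat-suitability score hs in [0, 1],
    # and a set of known presence cells.
    n_cells = 5000
    X = rng.normal(size=(n_cells, 3))      # e.g. elevation, slope, soil pH
    hs = rng.uniform(size=n_cells)         # stand-in for the ENFA HS map
    presence_idx = rng.choice(n_cells, size=100, replace=False)

    # Draw pseudo-absences with probability proportional to (1 - hs):
    # cells the ENFA map deems unsuitable are preferred as pseudo-absences.
    candidates = np.setdiff1d(np.arange(n_cells), presence_idx)
    w = 1.0 - hs[candidates]
    w /= w.sum()
    absence_idx = rng.choice(candidates, size=1000, replace=False, p=w)

    idx = np.concatenate([presence_idx, absence_idx])
    y = np.concatenate([np.ones(100), np.zeros(1000)])
    glm = LogisticRegression().fit(X[idx], y)   # binomial GLM, logit link
    print(glm.coef_)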
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned mainly with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical coordinates and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems for environmental data mining, from pattern recognition through modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is treated using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with a topical problem, the automatic mapping of geospatial data, for which general regression neural networks (GRNN) are proposed as an efficient model. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under the emergency-scenario conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural-hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly and easy-to-use interface.
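The GRNN used for automatic mapping is, in essence, Nadaraya–Watson kernel regression with a single bandwidth parameter sigma; a minimal sketch on synthetic data (not the SIC 2004 set) follows.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma):
        """General regression neural network = Nadaraya-Watson kernel regression:
        each prediction is a Gaussian-kernel weighted average of training targets."""
        # Squared Euclidean distances between query and training points.
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    # Synthetic 2-D spatial field as a stand-in for monitoring-network data.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(200, 2))               # station coordinates
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # measured values
    grid = rng.uniform(0, 10, size=(5, 2))              # prediction locations
    print(grnn_predict(X, y, grid, sigma=0.5))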
Abstract:
The Iowa Department of Natural Resources (IDNR) has requested the Iowa Department of Public Health (IDPH) Hazardous Waste Site Health Assessment Program evaluate future health impacts of exposures at a former aboveground storage tank site located in Rolfe, Iowa. The former aboveground storage tank site is located to the southwest of the intersection of Railroad Street and 300th Avenue in Rolfe, Iowa. This site is undergoing a Targeted Brownfields Assessment conducted by the Contaminated Sites Section of the IDNR. This health consultation addresses potential health risks to people from future exposure to the soil within the property boundary, and any health impacts resulting from contaminated groundwater beneath the site property. The information in this health consultation was current at the time of writing. Data that emerges later could alter this document’s conclusions and recommendations.
Abstract:
The determination of sediment storage is a critical parameter in sediment budget analyses. But in many sediment budget studies, the quantification of the magnitude and time-scale of sediment storage is still the weakest part and often relies on crude estimations only, especially in large drainage basins (>100 km²). We present a new approach to storage quantification in a meso-scale alpine catchment of the Swiss Alps (Turtmann Valley, 110 km²). The quantification of depositional volumes was performed by combining geophysical surveys and geographic information system (GIS) modelling techniques. Mean thickness values of each landform type calculated from these data were used to estimate the sediment volume in the hanging valleys and on the trough slopes. The sediment volume of the remaining subsystems was determined by modelling an assumed parabolic bedrock surface using digital elevation model (DEM) data. A total sediment volume of 781.3×10⁶–1005.7×10⁶ m³ is deposited in the Turtmann Valley. Over 60% of this volume is stored in the 13 hanging valleys. Moraine landforms contain over 60% of the deposits in the hanging valleys, followed by sediment stored on slopes (20%) and rock glaciers (15%). For the first time, a detailed quantification of different storage types has been achieved in a catchment of this size. Sediment volumes have been used to calculate mean denudation rates for the different processes, ranging from 0.1 to 2.6 mm/a based on a time span of 10 ka. As the quantification approach includes a number of assumptions and various sources of error, the values given represent the order of magnitude of sediment storage to be expected in a catchment of this size.
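As a rough order-of-magnitude check (a back-of-the-envelope calculation that ignores the sediment-to-bedrock density correction, and not the study's own per-process breakdown), spreading the lower bound of the total volume over the catchment area and the 10 ka time span gives a mean lowering rate inside the reported range:

    \bar{d} = \frac{V}{A\,t}
            = \frac{781.3\times10^{6}\,\mathrm{m^{3}}}
                   {110\times10^{6}\,\mathrm{m^{2}} \times 10^{4}\,\mathrm{a}}
            \approx 0.7\,\mathrm{mm/a}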
Abstract:
Intensifying competition between companies has confronted them with difficult challenges: products should reach the market faster, new products should be better than the old ones and, above all, better than the corresponding products of competitors, while design, manufacturing and other costs must not grow large. Product data, its management and its exchange are often used to help meet these challenges. Andritz, like other companies, must take these issues into account in order to succeed in the competition. This thesis was written for Andritz, one of the world's leading manufacturers of pulp- and paper-making equipment and providers of related maintenance services. Andritz is deploying an ERP system at all of its sites and wants to exploit it as effectively as possible, so product data covering the entire product life cycle is to be brought into the system. Some of the product data is created by Andritz's partners and subcontractors, so data exchange with these partners should also be arranged in such a way that the data flows directly into the ERP system. The goal of this thesis is therefore to find a solution for handling the data exchange between Andritz and its partners. This Master's thesis presents the purpose and importance of product data, its management and its exchange, and introduces several alternative solutions for implementing a data exchange system. Some of them are based on general or industry-specific standards, and two commercial products are also presented. The standards examined are PaperIXI, papiNet, X-OSCO, the PSK standards and RosettaNet. In addition, the data exchange solutions of the ERP vendor, SAP, are examined. The most promising alternatives are then examined in more detail, and finally the different solutions are compared with one another in order to find the best option for Andritz's needs.
Abstract:
The aim of this thesis was to determine which of two warehousing approaches, in-house warehousing or supplier warehousing, is the more economical way for the commissioning company to store maintenance spare parts. The purpose was to establish the costs of the case company's own warehousing and to compare them with the costs of supplier warehousing. Warehousing costs were examined by means of the spare-part items selected for the study. The study was qualitative, with interviews and various existing documents used as data collection methods. The theoretical part deals with warehousing, the outsourcing of warehousing, supplier warehousing and the costs arising from warehousing, and is based on earlier research and the literature of the field. The empirical part examines the commissioning company's spare-part warehousing and establishes the warehousing costs of the selected spare-part items. On the basis of the results, a so-called simulation model was built in a spreadsheet program, so that the study's warehousing cost calculations can also be applied to the case company's other spare-part items. The model offers a simple way to allocate warehousing costs to spare-part items, and thus makes it possible to obtain indicative answers when comparing the profitability of in-house warehousing and supplier warehousing.
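The spreadsheet "simulation model" described above reduces to allocating annual warehousing costs to spare-part items and comparing the two alternatives; a minimal sketch of that calculation, with entirely invented items and cost figures, might look like this:

    # Hypothetical comparison of in-house vs. supplier warehousing of spare parts.
    # All figures are invented placeholders for illustration.
    HOLDING_RATE = 0.20          # annual holding cost as fraction of inventory value
    SPACE_COST_PER_ITEM = 12.0   # allocated warehouse space/handling cost, EUR/year

    spare_parts = [
        # (item, average inventory value EUR, supplier warehousing fee EUR/year)
        ("bearing 6205", 400.0, 70.0),
        ("gear motor",  2500.0, 380.0),
        ("seal kit",     150.0, 45.0),
    ]

    for name, inv_value, supplier_fee in spare_parts:
        own = HOLDING_RATE * inv_value + SPACE_COST_PER_ITEM
        better = "supplier" if supplier_fee < own else "own"
        print(f"{name}: own {own:.0f} EUR/a vs supplier {supplier_fee:.0f} EUR/a -> {better}")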
Abstract:
Normally either the Güntelberg or Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations, and additionally of the parameter-free equations used in the Bates-Guggenheim convention and in the Pitzer formalism for activity coefficients, was tested with experimentally determined activity coefficients of HCl, HBr, HI, LiCl, NaCl, KCl, RbCl, CsCl, NH4Cl, LiBr, NaBr and KBr in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
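For reference, the equations being compared have the following commonly quoted forms, written in LaTeX (a sketch: A is the Debye–Hückel constant, I the ionic strength, z+ and z− the ionic charges, and B* and b the two electrolyte-specific Hückel parameters; the study's exact parameterizations may differ):

    \log_{10}\gamma_\pm = -A\,|z_+z_-|\,\frac{\sqrt{I}}{1+\sqrt{I}}
        \quad\text{(Güntelberg)}

    \log_{10}\gamma_\pm = -A\,|z_+z_-|\left(\frac{\sqrt{I}}{1+\sqrt{I}} - 0.3\,I\right)
        \quad\text{(Davies)}

    \log_{10}\gamma_\pm = -\frac{A\,|z_+z_-|\,\sqrt{I}}{1+B^{*}\sqrt{I}} + b\,I
        \quad\text{(two-parameter Hückel type)}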