927 results for: Non-ionic surfactant. Cloud point. Flory-Huggins model. UNIQUAC model. NRTL model


Relevance: 100.00%

Abstract:

The objectives of the present work were: 1) to develop and validate sensitive, substance-specific methods for the quantitative determination of anionic, non-ionic and amphoteric surfactants and their metabolites in aqueous environmental samples, using high-performance mass-spectrometric instrumentation; 2) to generate aerobic, polar degradation products of surfactants in a laboratory fixed-bed bioreactor (FBBR) that simulated real environmental conditions and whose biocoenosis originated from surface water; 3) to elucidate the degradation mechanism of surfactants by identifying and characterizing, by mass spectrometry, the new metabolites obtained in 2), and by following both the primary and the further degradation; 4) to obtain information on the input and behaviour of surfactants and their degradation products in wastewater and surface water under different hydrological and climatic conditions through quantitative studies; 5) to investigate the behaviour of persistent surfactant metabolites in waterworks treating contaminated surface water and to determine their occurrence in drinking water; 6) to estimate possible harmful effects of newly discovered metabolites by means of ecotoxicological bioassays; and 7) to demonstrate the environmental relevance of the degradation studies by comparing the field data with the results of the laboratory experiments. The compounds studied were selected on the basis of their production volume and their novelty on the surfactant market. They comprised the detergent ingredients linear alkylbenzene sulfonates (LAS), the surfactant with the highest production volume, the two non-ionic surfactants alkyl glucamides (AG) and alkyl polyglucosides (APG), and the amphoteric surfactant cocamidopropyl betaine (CAPB). In addition, the polymeric dye-transfer inhibitor polyvinylpyrrolidone (PVP) was investigated.

Relevance: 100.00%

Abstract:

The term cloud originates from the telecommunications world, when providers began to use services based on virtual private networks (VPNs) for data communication. Cloud computing concerns computation, software, data access and storage services delivered in such a way that the end user has no idea of the physical location of the data or of the configuration of the system on which they reside. Cloud computing is a recent trend in IT that moves computation and data away from desktops and laptops into large data centers. The NIST definition states that cloud computing is a model enabling on-demand network access to a shared pool of computing resources that can be rapidly provisioned and released with minimal management effort or service-provider interaction. With the large-scale proliferation of the Internet worldwide, applications can now be delivered as services over the Internet; as a result, the overall cost of these services is reduced. The main objective of cloud computing is to make better use of distributed resources, combining them to achieve higher throughput and to solve large-scale computational problems. Companies that rely on cloud services save on infrastructure costs and on the maintenance of computing resources, since they transfer this aspect to the provider; in this way they can concentrate exclusively on their core business. As cloud computing becomes more popular, concerns are being raised about the security problems introduced by this new model. The characteristics of this deployment model differ widely from those of traditional architectures, and traditional security mechanisms turn out to be inefficient or useless. Cloud computing offers many benefits, but it is also more vulnerable to threats; its many challenges and risks increase the threat of data compromise. These concerns make companies reluctant to adopt cloud solutions, slowing their spread. In recent years much effort has gone into research on the security of cloud environments, on the classification of threats and on risk analysis; unfortunately the problems of the cloud span several levels and no single solution exists. After a brief general introduction to cloud computing, the aim of this work is to give an overview of the main vulnerabilities of the cloud model, based on its characteristics, and then to carry out a risk analysis of cloud adoption from the customer's point of view. In this way, by weighing risks against opportunities, a customer can decide whether to adopt a cloud solution. Finally, a framework is presented that addresses one specific problem, namely malicious traffic on the cloud network. The work is structured as follows: the first chapter gives an overview of cloud computing, highlighting its characteristics, architecture, service models, deployment models and related problems. The second chapter introduces security in computing in general and then focuses on security in the cloud computing model.
The vulnerabilities arising from the technologies and characteristics that make up the cloud are considered, followed by a risk analysis. The risks are of various kinds, from purely technological ones to those deriving from legal or administrative issues, up to risks that are not specific to the cloud but that concern it nonetheless. For each risk, the assets affected in case of attack are listed and a risk level ranging from low to very high is assigned. Each risk must be weighed against the opportunities offered by the aspect from which that risk arises. The last chapter presents a framework for protecting the internal cloud network, installing an Intrusion Detection System with pattern recognition and anomaly detection.
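
The framework described in the last chapter pairs signature-based pattern recognition with anomaly detection. As a minimal, illustrative sketch of the anomaly-detection side only (not the framework from this work), the snippet below flags unusual network flows with scikit-learn's IsolationForest; the per-flow features and all numeric values are invented placeholders.

```python
# Minimal anomaly-detection sketch for cloud network traffic (illustrative only;
# feature names and data are synthetic placeholders, not the thesis framework).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic per-flow features: [bytes sent, packets, distinct destination ports]
normal = rng.normal(loc=[5e4, 40, 3], scale=[1e4, 10, 1], size=(1000, 3))
scans = rng.normal(loc=[2e3, 200, 150], scale=[5e2, 30, 20], size=(20, 3))  # port-scan-like flows

model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

flagged = model.predict(scans)          # -1 = anomaly, +1 = normal
print((flagged == -1).sum(), "of", len(scans), "suspicious flows flagged")
```

In a cloud deployment, a detector of this kind would be trained on traffic considered normal for the tenant's network and re-evaluated as workloads change.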

Relevance: 100.00%

Abstract:

A simple dependency between contact angle θ and velocity or surface tension has been predicted for the wetting and dewetting behavior of simple liquids. According to the hydrodynamic theory, this dependency was described by Cox and Voinov as θ ∼ Ca^(1/3) (Ca: capillary number). For more complex liquids like surfactant solutions, this prediction is not directly established.

Here I present a rotating drum setup for studying wetting/dewetting processes of surfactant solutions on the basis of velocity-dependent contact angle measurements. With this new setup I showed that surfactant solutions do not follow the predicted Cox-Voinov relation, but show a stronger contact angle dependency on surface tension. All surfactants, independent of their charge, showed this deviation from the prediction, so that electrostatic interactions could be excluded as a reason. Instead, I propose the formation of a surface tension gradient close to the three-phase contact line as the main reason for the strong contact angle decrease with increasing surfactant concentration. Surface tension gradients are not only formed locally close to the three-phase contact line, but also globally along the air-liquid interface due to the continuous creation/destruction of the interface by the drum moving out of/into the liquid. By systematically hindering the equilibration routes of the global gradient along the interface and/or through the bulk, I was able to show that the setup geometry is also important for the wetting/dewetting of surfactant solutions. Further, surface properties like roughness or chemical homogeneity of the wetted/dewetted substrate influence the wetting/dewetting behavior of the liquid, i.e. the three-phase contact line is pinned differently on rough/smooth or homogeneous/inhomogeneous surfaces. Altogether I showed that the wetting/dewetting of surfactant solutions did not depend on the surfactant type (anionic, cationic, or non-ionic) but on the surfactant concentration and strength, the setup geometry, and the surface properties.

Surfactants do not only influence the wetting/dewetting behavior of liquids, but also the impact behavior of drops on free-standing films or solutions. In a further part of this work, I dealt with the stability of the air cushion between drop and film/solution. To allow coalescence between drop and substrate, the air cushion has to vanish. In the presence of surfactants, the draining of the air is slowed down due to a change in the boundary condition from slip to no-slip, i.e. coalescence is suppressed or slowed down in the presence of surfactant.
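
For reference, the hydrodynamic baseline mentioned above is the Cox-Voinov law, θ_d³ ≈ θ_e³ + 9·Ca·ln(L/λ), which reduces to θ ∼ Ca^(1/3) for small equilibrium angles. A minimal sketch of that baseline follows; the equilibrium angle and the logarithmic factor ln(L/λ) are assumed placeholder values, not parameters of the rotating-drum experiments.

```python
# Cox-Voinov baseline for the dynamic contact angle:
#   theta_d^3 = theta_e^3 + 9 * Ca * ln(L/lambda),   Ca = mu * U / sigma
# The logarithmic factor ln(L/lambda) is an assumed placeholder (order 10).
import numpy as np

def cox_voinov_theta(Ca, theta_eq_deg=10.0, log_ratio=10.0):
    """Dynamic contact angle (degrees) predicted by Cox-Voinov for capillary number Ca."""
    theta_eq = np.radians(theta_eq_deg)
    theta_d = (theta_eq**3 + 9.0 * Ca * log_ratio) ** (1.0 / 3.0)
    return np.degrees(theta_d)

Ca = np.logspace(-5, -2, 4)   # capillary numbers typical of slow wetting experiments
print(dict(zip(Ca, cox_voinov_theta(Ca).round(1))))
```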

Relevance: 100.00%

Abstract:

The present work belongs to the PRANA project, the first extensive field campaign observing atmospheric emission spectra covering the far-infrared spectral region for more than two years. The principal deployed instrument is REFIR-PAD, a Fourier transform spectrometer that we used to study Antarctic cloud properties. A dataset covering the whole of 2013 has been analyzed: first, a selection of good-quality spectra is performed, using radiance values in a few chosen spectral regions as thresholds. These spectra are described in a compact way by averaging radiances over selected intervals, converting them into brightness temperatures (BTs) and finally considering the differences between each pair of them. A supervised feature selection algorithm is implemented with the purpose of selecting the features that are genuinely informative about the presence, phase and type of cloud. Training and test sets are then collected by means of Lidar quick-looks. The supervised classification of the overall monthly datasets is performed using an SVM. On the basis of this classification, and with the help of the Lidar observations, 29 non-precipitating ice cloud case studies are selected. A single spectrum, or at most an average over two or three spectra, is processed by means of the retrieval algorithm RT-RET, exploiting the main IR window channels, in order to extract cloud properties. Retrieved effective radii and optical depths are analyzed, compared with literature studies, and examined for possible seasonal trends. Finally, the retrieved atmospheric profiles are used as inputs for simulations, assuming two different crystal habits, with the aim of examining our ability to reproduce radiances in the FIR. Substantial mis-estimations are found for FIR micro-windows: a high variability is observed in the spectral pattern of the deviations of the simulations from the measured spectra, and an attempt has been made to link these deviations to cloud parameters.
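
One intermediate step above is converting averaged radiances into brightness temperatures (BTs). A minimal sketch of that conversion via the inverse Planck function in wavenumber form is given below; the unit conventions (wavenumber in cm⁻¹, radiance in mW m⁻² sr⁻¹ (cm⁻¹)⁻¹) are assumptions typical of FTS spectra, not taken from the REFIR-PAD processing chain.

```python
# Radiance -> brightness temperature via the inverse Planck function (wavenumber form).
# Assumed units: wavenumber in cm^-1, radiance in mW m^-2 sr^-1 (cm^-1)^-1.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def brightness_temperature(wavenumber_cm, radiance_mw):
    """Invert B_nu(T) = 2 h c^2 nu^3 / (exp(h c nu / (k T)) - 1) for T."""
    nu = wavenumber_cm * 100.0        # cm^-1 -> m^-1
    L = radiance_mw * 1e-3 / 100.0    # mW m^-2 sr^-1 (cm^-1)^-1 -> W m^-2 sr^-1 (m^-1)^-1
    return H * C * nu / (KB * np.log1p(2.0 * H * C**2 * nu**3 / L))

# Example: a window-channel radiance near 900 cm^-1 (placeholder value)
print(round(brightness_temperature(900.0, 60.0), 1), "K")
```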

Relevance: 100.00%

Abstract:

Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless we approximate the force law. The Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
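
As a schematic rendering of the closure described above (not the exact system analysed in this work): writing R for the end-to-end vector of a dumbbell and C = ⟨R ⊗ R⟩ for the conformation tensor, the Peterlin approximation evaluates the nonlinear spring law at the averaged extension,

```latex
% Peterlin closure (schematic): the spring law is evaluated at the mean-square extension.
\[
  F(R) \;=\; \gamma\!\left(|R|^{2}\right) R
  \;\approx\; \gamma\!\left(\langle |R|^{2} \rangle\right) R
  \;=\; \gamma\!\left(\operatorname{tr} C\right) R,
  \qquad C \;=\; \langle R \otimes R \rangle .
\]
```

This is what allows the kinetic dumbbell description to be closed into a macroscopic system for the velocity, pressure and conformation tensor alone.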

Relevance: 100.00%

Abstract:

The role of the binary nucleation of sulfuric acid in aerosol formation and its implications for global warming is one of the fundamental unsettled questions in atmospheric chemistry. We have investigated the thermodynamics of sulfuric acid hydration using ab initio quantum mechanical methods. For H2SO4(H2O)n where n = 1–6, we used a scheme combining molecular dynamics configurational sampling with high-level ab initio calculations to locate the global and many low-lying local minima for each cluster size. For each isomer, we extrapolated the Møller–Plesset perturbation theory (MP2) energies to their complete basis set (CBS) limit and added finite temperature corrections within the rigid-rotor–harmonic-oscillator (RRHO) model using scaled harmonic vibrational frequencies. We found that ionic pair (HSO4−·H3O+)(H2O)n−1 clusters are competitive with the neutral (H2SO4)(H2O)n clusters for n ≥ 3 and are more stable than neutral clusters for n ≥ 4, depending on the temperature. The Boltzmann-averaged Gibbs free energies for the formation of H2SO4(H2O)n clusters are favorable in colder regions of the troposphere (T = 216.65–273.15 K) for n = 1–6, but the formation of clusters with n ≥ 5 is not favorable at higher temperatures (T > 273.15 K). Our results suggest that the critical cluster of a binary H2SO4–H2O system must contain more than one H2SO4 and are in concert with recent findings (1) that the role of binary nucleation is small at ambient conditions, but significant in colder regions of the troposphere. Overall, the results support the idea that binary nucleation of sulfuric acid and water cannot account for nucleation of sulfuric acid in the lower troposphere.
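
The Boltzmann averaging over isomers mentioned above follows the standard prescription ΔG_avg = −RT ln Σ_i exp(−ΔG_i/RT). A minimal sketch is given below; the isomer free energies are invented placeholders, not the MP2/CBS values computed in this study.

```python
# Boltzmann-averaged Gibbs free energy over cluster isomers (standard formula;
# the isomer values below are placeholders, not the MP2/CBS results of the paper).
import numpy as np

R_KCAL = 1.987204e-3  # gas constant, kcal mol^-1 K^-1

def boltzmann_average_dG(dG_isomers_kcal, T):
    """dG_avg = -RT * ln( sum_i exp(-dG_i / (R T)) )."""
    dG = np.asarray(dG_isomers_kcal, dtype=float)
    return -R_KCAL * T * np.log(np.sum(np.exp(-dG / (R_KCAL * T))))

isomers = [-3.2, -2.9, -2.1]          # hypothetical formation free energies, kcal/mol
for T in (216.65, 273.15, 298.15):    # tropospheric to ambient temperatures
    print(T, "K ->", round(boltzmann_average_dG(isomers, T), 2), "kcal/mol")
```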

Relevance: 100.00%

Abstract:

Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines was reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it. This PM needs to be removed periodically by oxidation for the efficient functioning of the filter; this oxidation process is also known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs at high temperatures, about 500–600 °C, which is much higher than normal diesel exhaust temperatures. Thus, the exhaust temperature has to be raised with the help of external devices like a Diesel Oxidation Catalyst (DOC) or a fuel burner, and O2 then oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration is the use of NO2, which oxidizes the PM producing NO and CO2 as oxidation products. The passive oxidation process occurs at lower temperatures (200–400 °C) than active regeneration. Generally, DPF substrate walls are washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB. Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor. The DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
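
A zero-dimensional kinetic model of this kind usually reduces to a single rate equation for the retained PM mass with an Arrhenius rate constant. The sketch below (in Python rather than MATLAB) only illustrates that structure; the pre-exponential factor, activation energy, O2 dependence and initial load are assumed placeholder values, not the calibrated parameters of this work.

```python
# Zero-dimensional PM oxidation sketch: dm/dt = -A * exp(-Ea/(R*T)) * y_O2 * m.
# A, Ea, y_O2 and the initial PM load are illustrative placeholders, not calibrated values.
import numpy as np
from scipy.integrate import solve_ivp

A = 1.0e7          # pre-exponential factor, 1/s (assumed)
EA = 150.0e3       # activation energy, J/mol (assumed)
RGAS = 8.314       # gas constant, J/(mol K)
Y_O2 = 0.10        # O2 mole fraction in the exhaust (assumed)
T = 600 + 273.15   # active-regeneration temperature, K

def dmdt(t, m):
    k = A * np.exp(-EA / (RGAS * T)) * Y_O2
    return -k * m

sol = solve_ivp(dmdt, (0.0, 1200.0), [6.0], t_eval=np.linspace(0, 1200, 7))  # 6 g/L initial load
for t, m in zip(sol.t, sol.y[0]):
    print(f"t = {t:6.0f} s   PM = {m:.3f} g/L")
```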

Relevance: 100.00%

Abstract:

Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers, are heterogeneous by nature, and require adjustment, composition and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements by leveraging incubating virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.

Relevance: 100.00%

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of a chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting a doctor in the past twelve months was 85% and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression methods, was a useful approach to demand estimation for primary care in urban settings.
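
The two-part strategy described above, a probit for any use and a negative binomial count model conditional on use, can be outlined as follows. The sketch uses synthetic data and placeholder covariates; it is not the CHIS 2005 specification, and the conditional count equation is fitted here as a plain negative binomial rather than a truncated one.

```python
# Two-part demand model sketch: probit for use/non-use, negative binomial for visit counts.
# Data and covariates are synthetic placeholders, not CHIS 2005 variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
X = sm.add_constant(np.column_stack([
    rng.normal(45, 15, n),          # age
    rng.binomial(1, 0.8, n),        # insured
    rng.binomial(1, 0.3, n),        # chronic condition
]))

# Synthetic outcomes loosely mimicking the pattern above (insurance and need raise use).
p_use = 1 / (1 + np.exp(-(-0.5 + 0.8 * X[:, 2] + 1.2 * X[:, 3])))
any_visit = rng.binomial(1, p_use)
visits = any_visit * rng.poisson(1 + 3 * X[:, 3])

probit = sm.Probit(any_visit, X).fit(disp=False)                       # use/non-use equation
users = visits > 0
negbin = sm.NegativeBinomial(visits[users], X[users]).fit(disp=False)  # utilization equation
print(probit.params.round(2), negbin.params.round(2), sep="\n")
```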

Relevance: 100.00%

Abstract:

The purpose of this work is twofold: first, to develop a process to automatically create parametric models of the aorta that can adapt to any possible intraoperative deformation of the vessel; second, to provide the tools needed to perform this deformation in real time by means of a non-rigid registration method. This dynamically deformable model will later be used in a VR-based surgery guidance system for aortic catheterism procedures, showing the vessel changes in real time.

Relevance: 100.00%

Abstract:

This thesis proposes a new method for mapping non-destructive tests (NDT) in historical buildings using GIS-based techniques. First, a method is defined by which a 3D map based on the point cloud of an architectural element, obtained by photogrammetry, can be produced and converted into raster and vector maps readable by GIS systems, using a particular coordinate system that references each point of the photogrammetric cloud. This initial map is called the base map. Next, a method is defined by which the points where an NDT test is performed are referenced to the coordinate system of the base plane, which allows the generation of referenced test maps and makes it possible to place data from multiple tests on the same base plane. These new maps are called data maps, and their usefulness is demonstrated in the study of deterioration and moisture. The time factor is included in the maps, and it is shown how this makes interdisciplinary work possible in the elaboration of the diagnosis. Finally, new maps, unpublished until now, are generated by combining different data maps over the same base planimetry. These new maps lead to what have been defined as moisture isogram maps, salinity isogram maps, and humidity, evaporation, salinity and material degradation factors. This system provides a better overall view of the data obtained in the study of the historic building, favouring a correct and rigorous interpretation of the data for its subsequent restoration.
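
As an illustrative sketch of turning scattered NDT readings, referenced to the base-map coordinate system, into a continuous "data map" (e.g., a moisture isogram surface): the coordinates and moisture values below are invented, and no real GIS layer or photogrammetric cloud is read.

```python
# Rasterizing scattered NDT readings onto a base-map grid (illustrative sketch only;
# coordinates and moisture values are made up, and no real GIS layer is used).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(40, 2))                           # NDT test positions in base-map coordinates (m)
moisture = 5 + 2 * np.sin(pts[:, 0]) + rng.normal(0, 0.3, 40)    # % moisture readings

# Regular raster aligned with the base planimetry
gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
raster = griddata(pts, moisture, (gx, gy), method="linear")      # moisture "isogram" surface

print(np.nanmin(raster).round(2), np.nanmax(raster).round(2))
```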

Relevance: 100.00%

Abstract:

The use of pesticides has increased the productivity and quality of agricultural products, but it also leads to the intoxication of living organisms through the gradual ingestion of pesticide residues that contaminate soil, water and food. There is therefore a need for constant monitoring of their concentrations in environmental compartments. To this end, fast, low-cost extraction and enrichment methods that generate a small volume of waste and contribute to green chemistry are sought. Among these methods, ultrasonic-bath extraction and cloud point extraction stand out. After the extraction procedure, the extract obtained can be analyzed by High Performance Liquid Chromatography (HPLC) and Sequential Injection Chromatography (SIC), employing modern stationary phases such as monolithic and superficially porous particle phases. The use of SIC with a monolithic column (C18, 50 x 4.6 mm) and a column packed with superficially porous particles (C18, 30 x 4.6 mm, 2.7 µm particle size) was studied for the separation of simazine (SIM) and atrazine (ATR) and their metabolites desethylatrazine (DEA), desisopropylatrazine (DIA) and hydroxyatrazine (HAT). Separation was achieved by step elution with mobile phases composed of acetonitrile (ACN) and 2.5 mM ammonium acetate/acetic acid buffer (NH4Ac/HAc) at pH 4.2. Separation on the monolithic column was carried out with two mobile phases, MP1 = 15:85 (v v-1) ACN:NH4Ac/HAc and MP2 = 35:65 (v v-1) ACN:NH4Ac/HAc, at a flow rate of 35 µL s-1. Separation on the superficially porous particle column was achieved with mobile phases MP1 = 13:87 (v v-1) ACN:NH4Ac/HAc and MP2 = 35:65 (v v-1) ACN:NH4Ac/HAc at a flow rate of 8 µL s-1. Ultrasonic-bath extraction of soil fortified with the herbicides (100 and 1000 µg kg-1) gave recoveries between 42 and 160%. The separation of DEA, DIA, HAT, SIM and ATR by HPLC was obtained with a linear gradient from 13 to 35% ACN for the monolithic column and from 10 to 35% ACN for the superficially porous particle column, the aqueous phase consisting of 2.5 mM NH4Ac/HAc buffer at pH 4.2. For both columns the flow rate was 1.5 mL min-1 and the analysis time 15 min. Ultrasonic-bath extraction of soil samples containing ATR, fortified at concentrations from 250 to 1000 µg kg-1, gave recoveries between 40 and 86%. The presence of ATR was confirmed by mass spectrometry. Fortification studies with ATR and SIM in water samples were carried out using cloud point extraction with the surfactant Triton X-114. HPLC separation was obtained with a linear gradient from 13 to 90% ACN for the monolithic column and from 10 to 90% ACN for the packed column, always in 2.5 mM NH4Ac/HAc buffer at pH 4.2. For both columns the flow rate was 1.5 mL min-1 and the analysis time 16 min. Fortifications between 1 and 50 µg L-1 resulted in recoveries between 65 and 132%.
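
The recoveries quoted above follow the usual definition for fortified samples, recovery (%) = 100 × C_found / C_spiked. A trivial sketch, with placeholder concentrations inside the ranges quoted above:

```python
# Percent recovery from fortified (spiked) samples: recovery = 100 * found / spiked.
# The concentrations below are placeholders within the ranges quoted above.
def recovery_percent(found, spiked):
    return 100.0 * found / spiked

spiked_ug_L = 10.0   # fortification level, ug/L
found_ug_L = 8.7     # concentration measured after cloud point extraction and HPLC
print(f"{recovery_percent(found_ug_L, spiked_ug_L):.0f}% recovery")
```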

Relevance: 100.00%

Abstract:

The stabilization of reduced graphene oxide (RGO) sheets in aqueous dispersion using a wide range of surfactants of anionic, non-ionic and zwitterionic type has been investigated and compared under different conditions of pH, surfactant and RGO concentration, and sheet size. The observed differences in the performance of the surfactants were rationalized on the basis of their chemical structure (e.g., alkyl vs. aromatic hydrophobic tails or sulfonic vs. carboxylic polar heads), thus providing a reference framework for the selection of appropriate surfactants for the processing of RGO suspensions for particular purposes. RGO-surfactant composite paper-like films were also prepared through vacuum filtration of the corresponding mixed dispersions, and their main characteristics were investigated. The composite paper-like films were also characterized electrochemically: those prepared with two specific surfactants exhibited a high capacitance relative to their surfactant-free counterpart.

Relevance: 100.00%

Abstract:

Previous research shows that correlations tend to increase in magnitude when individuals are aggregated across groups. This suggests that uncorrelated constellations of personality variables (such as the primary scales of Extraversion and Neuroticism) may display much higher correlations in aggregate factor analysis. We hypothesize and report that individual-level factor analysis can be explained in terms of Giant Three (or Big Five) descriptions of personality, whereas aggregate-level factor analysis can be explained in terms of Gray's physiologically based model. Although alternative interpretations exist, aggregate-level factor analysis may correctly identify the basis of an individual's personality as a result of the better reliability of measures due to aggregation. We discuss the implications of this form of analysis in terms of construct validity, personality theory, and its applicability in general. Copyright © 2003 John Wiley & Sons, Ltd.
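
The aggregation effect described in the first sentence is easy to reproduce numerically: two variables that share only a group-level influence correlate weakly across individuals but strongly across group means. The sketch below uses an invented group structure and effect sizes purely for demonstration.

```python
# Illustration of the aggregation effect: two traits that are nearly uncorrelated within
# groups can correlate strongly once individuals are averaged across groups.
# Group structure and effect sizes are made up for demonstration.
import numpy as np

rng = np.random.default_rng(3)
n_groups, per_group = 50, 30

group_factor = rng.normal(size=n_groups)               # shared group-level influence
x = np.repeat(group_factor, per_group) + rng.normal(size=n_groups * per_group) * 3
y = np.repeat(group_factor, per_group) + rng.normal(size=n_groups * per_group) * 3

r_individual = np.corrcoef(x, y)[0, 1]
group_means_x = x.reshape(n_groups, per_group).mean(axis=1)
group_means_y = y.reshape(n_groups, per_group).mean(axis=1)
r_aggregate = np.corrcoef(group_means_x, group_means_y)[0, 1]

print(f"individual-level r = {r_individual:.2f}, aggregate-level r = {r_aggregate:.2f}")
```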

Relevance: 100.00%

Abstract:

This paper summarises test results that were used to validate a model and scale-up procedure of the high pressure grinding roll (HPGR), which was developed at the JKMRC by Morrell et al. [Morrell, Lim, Tondo, David, 1996. Modelling the high pressure grinding rolls. In: Mining Technology Conference, pp. 169-176.]. Verification of the model is based on results from four data sets that describe the performance of three industrial-scale units fitted with both studded and smooth roll surfaces. The industrial units are currently in operation within the diamond mining industry and are represented by De Beers, BHP Billiton and Rio Tinto. Ore samples from the De Beers and BHP Billiton operations were sent to the JKMRC for ore characterisation and HPGR laboratory-scale tests; Rio Tinto contributed an historical data set of tests completed during a previous research project. The results show that the modelling of the HPGR process has matured to a point where the model may be used to evaluate new comminution circuits and to optimise existing ones. The model prediction of product size distribution is good and has been found to be strongly dependent on the characteristics of the material being tested. The prediction of throughput and corresponding power draw (based on throughput) is sensitive to the inconsistent gap/diameter ratios observed between laboratory-scale tests and full-scale operations. (C) 2004 Elsevier Ltd. All rights reserved.