927 results for Non-ionic surfactant. Cloud point. Flory-Huggins model. UNIQUAC model. NRTL model
Abstract:
In Switzerland, tobacco cessation programmes generally follow the stages-of-change model of Prochaska and DiClemente (1983). Patients with smoking-related somatic diseases, such as cardiovascular or pulmonary conditions, have ready access to these programmes, unlike patients dependent on illicit drugs. Yet the prevalence of smokers in this population is high, and the problems caused by tobacco are substantial, not only from an individual point of view but also in terms of public health. It is therefore of interest to assess motivation for smoking cessation among drug-dependent patients undertaking withdrawal from illicit drugs. In this study, we assessed the stages of change regarding tobacco dependence in 100 drug-dependent patients voluntarily hospitalized in an illicit-drug withdrawal programme. The assessment used a self-administered questionnaire, whose results indicate that only a minority of patients are resolved to stop smoking: only 15% of the patients were in the contemplation or decision stages. Moreover, 93% of the subjects considered quitting smoking difficult or very difficult. These data show a marked gap between motivation for illicit-drug withdrawal and motivation to quit smoking: despite their high motivation to withdraw from illicit drugs, the proportion of patients remaining in the precontemplation stage for smoking cessation is high. Several hypotheses may explain these results, notably the perception that quitting smoking is harder to achieve than withdrawing from illicit substances.
Abstract Nicotine cessation programmes in Switzerland, which are commonly based on the stage of change model of Prochaska and DiClemente (1983), are rarely offered to patients with illicit drug dependence. This stands in contrast to the high smoking rates and the heavy burden of tobacco-related problems in these patients. The stage of change was therefore assessed by self-administered questionnaire in 100 inpatients attending an illegal drug withdrawal programme. Only 15% of the patients were in the contemplation or decision stage. 93% considered smoking cessation to be difficult or very difficult. These data show a discrepancy between the motivation to change illegal drug consumption habits and the motivation for smoking cessation. The high proportion of patients remaining in the precontemplation stage for smoking cessation, in spite of their motivation for illicit drug detoxification, may be due to the perception that cessation of smoking is more difficult than illicit drug abuse cessation.
Abstract:
The non-obese diabetic (NOD) mouse is a model for the study of insulin-dependent diabetes mellitus (IDDM). Recently, transgenic NOD mice have been derived (NOD-E) that express the major histocompatibility complex (MHC) class II I-E molecule. NOD-E mice do not become diabetic and show negligible pancreatic insulitis. One possibility was that NOD-E mice are protected from disease by a process of T-cell deletion or anergy. This paper describes our attempts to discover whether this was so by comparing NOD and NOD-E mouse T-cell receptor V beta usage. Splenocytes and lymph node cells were tested for their ability to proliferate in response to monoclonal anti-V beta antibodies. We were unable to show any consistent differences between NOD and NOD-E responses to the panel of antibodies used. V beta families previously proposed as candidates were shown to be unlikely targets for deletion or anergy. T cells present at low frequency (V beta 5+) in both NOD and NOD-E mice were shown to be as capable of expansion in response to antigenic stimulation as more frequently expressed V beta. Our data therefore do not support deletion or anergy as mechanisms that could account for the observed disease protection in NOD-E mice.
Abstract:
In this paper, we study dynamical aspects of the two-dimensional (2D) gonihedric spin model using both numerical and analytical methods. This spin model has vanishing microscopic surface tension and actually describes an ensemble of loops living on a 2D surface. The self-avoidance of loops is parametrized by a parameter κ. The κ=0 model can be mapped to one of the six-vertex models discussed by Baxter, and it does not have critical behavior. We have found that allowing for κ≠0 does not lead to critical behavior either. Finite-size effects are rather severe, and in order to understand these effects, a finite-volume calculation for non-self-avoiding loops is presented. This model, like its 3D counterpart, exhibits very slow dynamics, but a careful analysis of dynamical observables reveals nonglassy evolution (unlike its 3D counterpart). We find, also in this κ=0 case, the law that governs the long-time, low-temperature evolution of the system, through a dual description in terms of defects. A power law, rather than a logarithmic one, has been found for the approach to equilibrium.
Abstract:
This paper highlights the role of non-functional information when reusing components from a library. We describe a method for selecting appropriate implementations of Ada packages that takes non-functional constraints into account; these constraints model the context of reuse. Constraints take the form of queries written in an interface description language called NoFun, which is also used to state non-functional information in Ada packages; query results are trees of implementations, following the import relationships between components. We distinguish two situations when reusing components, depending on whether the library being searched is taken as closed or extendible. The resulting tree of implementations can be manipulated by the user to resolve ambiguities, to state default behaviours, and the like. As part of the proposal, we address the problem of computing, from code, the non-functional information that drives the selection process.
Abstract:
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, in previous work we developed a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
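The recasting step described above can be illustrated with a minimal sketch. The rate law, parameter values, and the auxiliary variable z below are invented for illustration and are not from the paper; the point is only that introducing z = Km + S turns a saturable rate into an exact product of power laws, which is the GMA canonical form.

```python
# Sketch (invented names and numbers): recasting a saturable rate into an
# equivalent power-law (GMA) form via an auxiliary variable z = Km + S.

Vmax, Km = 1.0, 0.5

def v_mm(S):
    # Original saturable rate (not a power law)
    return Vmax * S / (Km + S)

def v_gma(S, z):
    # Recast rate: an exact product of power laws, S^1 * z^-1
    return Vmax * S**1 * z**-1

def euler(rate, S0, dt, steps):
    # Forward-Euler integration of dS/dt = -rate(S)
    S, traj = S0, [S0]
    for _ in range(steps):
        S = S - dt * rate(S)
        traj.append(S)
    return traj

# Original model
orig = euler(v_mm, 2.0, 1e-3, 5000)

# Recast model: integrate (S, z) jointly, with dz/dt = dS/dt
S, z = 2.0, Km + 2.0
recast = [S]
for _ in range(5000):
    dS = -v_gma(S, z)
    S, z = S + 1e-3 * dS, z + 1e-3 * dS
    recast.append(S)

# The recast system is a change of variables, not an approximation,
# so the trajectories coincide up to floating-point noise.
err = max(abs(a - b) for a, b in zip(orig, recast))
```

Because the recast model is built only from products of power laws, the specialized GMA global-optimization machinery applies to it directly, and any optimum found maps back to the original non-linear model.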
Abstract:
The aim of this work was to propose two didactic experiments for use in practical classes of analytical chemistry courses. The experiments are flexible and related to the theme, giving the instructor several options. Experiment 1 was divided into two parts: the first emphasizes visualization of the separation into two distinct phases, the surfactant-rich and surfactant-poor phases; the second emphasizes metal preconcentration (with Co as an example). Experiment 2 has three parts: the first points out complex formation, the second shows the influence of pH, and the last demonstrates the influence of complexation time.
Abstract:
Clay is often employed as a catalyst, but quartz impurities can decrease the catalytic efficiency. Fine clay particles can be purified by flotation. We examined the cationic surfactant hexadecyltrimethylammonium bromide (HTAB), the anionic surfactant sodium dodecyl sulfate (SDS), and the non-ionic surfactant TRITON X-100 for separating quartz impurities from clay. The separation was monitored by X-ray diffraction, following changes in the peaks corresponding to clay and quartz. The cationic surfactant HTAB was the most effective in separating the quartz-clay mixture; the selectivity can be explained by internal adsorption of the surfactant onto the clay and external adsorption onto the quartz.
Abstract:
In this work, the interactions between the non-ionic polymer ethyl(hydroxyethyl)cellulose (EHEC) and the mixed anionic surfactants sodium dodecanoate (SDoD) and sodium decanoate (SDeC) in aqueous media at pH 9.2 (20 mM borate/NaOH buffer) were investigated by electrical conductivity and light transmittance measurements at 25 °C. The parameters of the surfactant-polymer association process, such as the critical aggregation concentration and the saturation of the polymer by surfactant, were determined from plots of specific conductivity vs total surfactant concentration, [surfactant]tot = [SDoD] + [SDeC]. The results did not reveal any specific binding of the surfactant to the polymer, implying a purely cooperative association phenomenon.
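As a sketch of the analysis described above (not the authors' data or code): the critical aggregation concentration is read off as the breakpoint where the slope of specific conductivity vs [surfactant]tot changes. The numbers below are synthetic and invented for illustration.

```python
# Illustrative only: fit straight lines to the two branches of a
# conductivity-vs-concentration plot and take their intersection as the
# breakpoint (the critical aggregation concentration). All data synthetic.

def linfit(xs, ys):
    # Ordinary least squares for y = a*x + b; returns (a, b, sse)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def find_breakpoint(xs, ys):
    # Try every interior split, keep the split with the smallest total
    # residual, and return the intersection of the two fitted lines.
    best = None
    for i in range(3, len(xs) - 2):
        a1, b1, s1 = linfit(xs[:i], ys[:i])
        a2, b2, s2 = linfit(xs[i:], ys[i:])
        if best is None or s1 + s2 < best[0]:
            best = (s1 + s2, (b2 - b1) / (a1 - a2))
    return best[1]

# Synthetic conductivity data with a slope change at 10 mM (the "cac")
xs = [0.5 * k for k in range(1, 41)]                      # 0.5 .. 20 mM
ys = [6.0 * x if x <= 10 else 60.0 + 2.5 * (x - 10) for x in xs]

cac = find_breakpoint(xs, ys)
```

The saturation point would be located the same way from a second slope change at higher total surfactant concentration.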
Abstract:
Most of the applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), which is derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and low numbers of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring.
The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of tree heights estimated from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend-surface interpolation, cannot retain small terrain features (e.g., bumps, small hills, and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing methods based on height percentiles in the airborne laser scanner data. Being based on a moving voxel, however, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps which can provide more realistic information on the risk of crown fires compared to CBH.
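The normalization step that motivates the DTM work can be sketched in a few lines. The grid layout, cell size, and sample coordinates below are assumptions for illustration only; real DTMs are interpolated surfaces, not exact lookup tables.

```python
# Minimal sketch of point-cloud normalization: height above ground equals
# point elevation minus the DTM ground elevation at the point's (x, y).
# Grid layout and cell size are illustrative assumptions.

CELL = 1.0  # assumed DTM cell size in metres

def normalize(points, dtm):
    """points: list of (x, y, z); dtm: dict {(col, row): ground elevation}.
    Returns (x, y, height above ground) using nearest-cell lookup."""
    out = []
    for x, y, z in points:
        cell = (int(x // CELL), int(y // CELL))
        out.append((x, y, z - dtm[cell]))
    return out

# Toy DTM over two cells, and two returns: one canopy hit, one ground hit
dtm = {(0, 0): 100.0, (1, 0): 101.0}
cloud = [(0.4, 0.5, 118.2),   # canopy return over cell (0, 0)
         (1.6, 0.2, 101.0)]   # ground return over cell (1, 0)

normalized = normalize(cloud, dtm)
```

A DTM error propagates one-to-one into every normalized height, which is why the thesis emphasizes DTM accuracy as the limiting factor for tree-height estimation.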
Abstract:
The preparation of controlled-release formulations is the area of pharmaceutical science that aims to modify the immediate environment of active ingredients in order to improve their efficacy and safety. This goal can be achieved by modifying the kinetics of circulation in the blood or the distribution in the body. The goal of this research project was to study the pharmacokinetic (PK) profile of different liposomal formulations. PK analysis, generally used to represent and predict the plasma concentrations of drugs and their metabolites, was used here to characterize in vivo pH-sensitive formulations intended to modify the intracellular distribution of active ingredients, as well as liposomes intended for the treatment of drug overdose. First, the PK of a pH-sensitive copolymer based on N-isopropylacrylamide (NIPAM) and methacrylic acid (MAA) was studied. This copolymer, p(NIPAM-co-MAA), is used in our laboratory to prepare pH-sensitive liposomes. The PK study of the blood-concentration profiles of different polymers identified the characteristics that influence the circulation of macromolecules in the body: the size of the molecules, their cloud point, and the presence of a hydrophobic segment at the chain ends proved decisive. The glomerular filtration threshold of the polymer was estimated at 32,000 g/mol. Finally, PK analysis confirmed that the complexes formed by binding the polymer to the liposome surface remained stable in the blood after intravenous injection. These data established that it is possible to synthesize a polymer that can be adequately eliminated by renal filtration and that pH-sensitive liposomes prepared with it remain intact in the body.
Second, PK analysis was used in the development of liposomes with a transmembrane pH gradient for the treatment of drug overdose. A formulation was developed and optimized in vitro to capture a model drug, diltiazem (DTZ). The liposomal formulation proved 40 times more effective than the lipid emulsions used clinically. PK analysis of the liposomes confirmed the stability of the formulation in vivo and allowed the influence of the liposomes on the plasma circulation of DTZ and of its main metabolite, desacetyldiltiazem (DAD), to be analyzed. The liposomes were shown to capture and sequester the active ingredient in the bloodstream when it was administered intravenously. Injecting the liposomes 2 minutes before DTZ administration significantly increased the area under the curve of DTZ and DAD while decreasing their plasma clearance and volume of distribution. The effect of these PK changes on the pharmacological activity of the drug was then evaluated. The liposomes reduced the hypotensive effect of the active ingredient administered as a bolus or as a one-hour infusion. In this work, PK analysis served to establish proof of concept that liposomes with a transmembrane pH gradient can modify the PK of a cardiovascular drug and reduce its pharmacological activity. These results will serve as a basis for developing the formulation intended for the treatment of drug overdose. This work underlines the relevance of PK analysis in the development of pharmaceutical carriers intended for a variety of applications.
At this stage of development, the predictive side of the analysis was not exploited, but its descriptive side made it possible to compare different formulations adequately and to draw relevant conclusions about their fate in the body.
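The descriptive PK quantities relied on above (area under the curve, plasma clearance, volume of distribution) follow from standard non-compartmental relations. A minimal sketch with invented concentration data, not the study's results:

```python
# Illustrative non-compartmental analysis with made-up numbers: AUC by the
# trapezoidal rule, clearance CL = Dose / AUC, and volume of distribution
# Vd = CL / k, with k estimated from the terminal log-linear slope.

import math

dose = 10.0                            # mg, IV bolus (assumed)
t = [0.25, 0.5, 1, 2, 4, 8]            # sampling times, h (assumed)
c = [4.0, 3.5, 2.8, 1.8, 0.75, 0.13]   # plasma conc., mg/L (invented)

# AUC from the first to the last sample, linear trapezoids
auc = sum((t[i + 1] - t[i]) * (c[i] + c[i + 1]) / 2
          for i in range(len(t) - 1))

# Terminal elimination rate constant from the last two log-linear points
k = (math.log(c[-2]) - math.log(c[-1])) / (t[-1] - t[-2])

# Extrapolate AUC to infinity, then derive clearance and volume
auc_inf = auc + c[-1] / k
cl = dose / auc_inf      # L/h
vd = cl / k              # L
```

Sequestration of a drug by circulating liposomes raises its measured plasma AUC, which by these relations appears as the lower clearance and smaller volume of distribution reported above.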
Abstract:
This thesis concerns the phase-separation behaviour of two well-known thermosensitive polymers, poly(N-isopropylacrylamide) (PNIPAM) and poly(2-isopropyl-2-oxazoline) (PIPOZ). Despite the many studies on these two polymers, two aspects of their thermal properties remain to be clarified. One is the cosolvent effect on PNIPAM in mixtures of water and a water-miscible solvent; the other is the effect of chain-end groups on the phase separation of PIPOZ. To address the first, we studied the effect of chain architecture on the cosolvent effect of PNIPAM in methanol/water mixtures, using a 4-arm star PNIPAM and a cyclic PNIPAM as models. With star PNIPAM, the attachment of the PNIPAM arms to a hydrophobic core causes a reduction in Tc (the cloud-point temperature) and a weaker enthalpy of the phase transition. Moreover, the Tc of star PNIPAM depends on the molar mass of the polymer. The cooperativity of dehydration decreases for star and cyclic PNIPAM because of topological constraints. A study of the influence of polymer concentration on the cosolvent effect of PNIPAM in methanol/water showed that macroscopic liquid-liquid phase separation (MLLPS) occurs for PNIPAM solutions in methanol/water with methanol mole fractions between 0.127 and 0.421 at a constant PNIPAM concentration of 10 g L-1. After two days of equilibration at room temperature, the turbid suspension of PNIPAM in methanol/water separates into two phases, one of which contains much more PNIPAM than the other. A phase diagram showing the MLLPS for the PNIPAM/water/methanol mixture was established from the experimental data. The size and morphology of the droplets in the condensed polymer-rich phase depend on the methanol mole fraction.
Because the presence of methanol affects the surface tension of the liquid droplets, the otherwise slow equilibration of phase separation in the PNIPAM/water/methanol system is accelerated and a macroscopic liquid-liquid phase separation appears. To study the effect of end groups on the solution properties of PIPOZ, two telechelic PIPOZs bearing perfluorodecanyl (FPIPOZ) or octadecyl (C18PIPOZ) chain ends were synthesized. The Tc values of the telechelic polymers are much lower than that of PIPOZ. Stable micelles form in aqueous solutions of the telechelic polymers. The micellization and phase separation of these polymers in water were studied. The phase separation of telechelic PIPOZs follows the MLLPS mechanism. Differences in the sizes of the droplets formed in the solutions of the two polymers were observed. To probe these differences in association behaviour further, the signal intensities of the corresponding polymers and the relaxation times T1 and T2 were measured; the T2 values of the protons assigned to the IPOZ units are higher.
Abstract:
The self-assembly into wormlike micelles of a poly(ethylene oxide)-b-poly(propylene oxide)-b-poly(ethylene oxide) triblock copolymer Pluronic P84 in aqueous salt solution (2 M NaCl) has been studied by rheology, small-angle X-ray and neutron scattering (SAXS/SANS), and light scattering. Measurements of the flow curves by controlled stress rheometry indicated phase separation under flow. SAXS on solutions subjected to capillary flow showed alignment of micelles at intermediate shear rates, although loss of alignment was observed for high shear rates. For dilute solutions, SAXS and static light scattering data on unaligned samples could be superposed over three decades in scattering vector, providing unique information on the wormlike micelle structure over several length scales. SANS data provided information on even shorter length scales, in particular, concerning "blob" scattering from the micelle corona. The data could be modeled based on a system of semiflexible self-avoiding cylinders with a circular cross-section, as described by the wormlike chain model with excluded volume interactions. The micelle structure was compared at two temperatures close to the cloud point (47 degrees C). The micellar radius was found not to vary with temperature in this region, although the contour length increased with increasing temperature, whereas the Kuhn length decreased. These variations result in an increase of the low-concentration radius of gyration with increasing temperature. This was consistent with dynamic light scattering results, and, applying theoretical results from the literature, this is in agreement with an increase in endcap energy due to changes in hydration of the poly(ethylene oxide) blocks as the temperature is increased.
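For reference, the dependence of the radius of gyration on contour length L and persistence length ℓ_p (Kuhn length b = 2ℓ_p) for an ideal wormlike chain is the Benoit-Doty result; the excluded-volume interactions included in the model used above modify it, but it shows how R_g depends on both quantities:

```latex
\langle R_g^2 \rangle \;=\; \frac{\ell_p L}{3} \;-\; \ell_p^2
\;+\; \frac{2\ell_p^3}{L}
\;-\; \frac{2\ell_p^4}{L^2}\left(1 - e^{-L/\ell_p}\right)
```

In the flexible limit L ≫ ℓ_p this reduces to ⟨R_g²⟩ ≈ ℓ_p L/3 = bL/6, so at a given stiffness the radius of gyration grows with contour length, consistent with the temperature trend reported above.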
Abstract:
The phase diagram of a series of poly(1,2-octylene oxide)-poly(ethylene oxide) (POO-PEO) diblock copolymers is determined by small-angle X-ray scattering. The Flory-Huggins interaction parameter was measured by small-angle neutron scattering. The phase diagram is highly asymmetric due to large conformational asymmetry that results from the hexyl side chains in the POO block. Non-lamellar phases (hexagonal and gyroid) are observed near f(PEO) = 0.5, and the lamellar phase is observed for f(PEO) >= 0.5.
Abstract:
Monte Carlo field-theoretic simulations (MCFTS) are performed on melts of symmetric diblock copolymers for invariant polymerization indexes extending down to experimentally relevant values of N̅ ∼ 10^4. The simulations are performed with a fluctuating composition field, W_−(r), and a pressure field, W_+(r), that follows the saddle-point approximation. Our study focuses on the disordered-state structure function, S(k), and the order−disorder transition (ODT). Although short-wavelength fluctuations cause an ultraviolet (UV) divergence in three dimensions, this is readily compensated for with the use of an effective Flory−Huggins interaction parameter, χ_e. The resulting S(k) matches the predictions of renormalized one-loop (ROL) calculations over the full range of χ_eN and N̅ examined in our study, and agrees well with Fredrickson−Helfand (F−H) theory near the ODT. Consistent with the F−H theory, the ODT is discontinuous for finite N̅ and the shift in (χ_eN)_ODT follows the predicted N̅^−1/3 scaling over our range of N̅.
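For context, the Fredrickson-Helfand fluctuation correction referenced above predicts, for symmetric diblock copolymer melts,

```latex
(\chi N)_{\mathrm{ODT}} \;\simeq\; 10.495 \;+\; 41.0\,\bar{N}^{-1/3}
```

which recovers the mean-field value (χN)_ODT = 10.495 as N̄ → ∞; the N̄^{−1/3} scaling of the fluctuation-induced shift is the one the simulations reproduce.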
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs, and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values of these quantities are the residual of the sum of terms of opposing signs.
For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. The source of this effect is largely the construction of the input ozone fields, which overestimates the true ensemble spread. Hence, while the average of multi-model fields is normally appropriate for calculating mean RF, GWP and GTP, it is not a reliable method for calculating the uncertainty in these fields, and in general overestimates the uncertainty.
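The overestimation mechanism described above (per-grid-point standard deviations added coherently in a constructed field, versus incoherent model-to-model differences) can be reproduced with a toy calculation. Everything below, including the stand-in "RF" functional and the noise model, is invented for illustration:

```python
# Toy demonstration: a "mean + std" input field overestimates the spread
# of a derived metric, because per-cell standard deviations add linearly
# in the constructed field while uncorrelated model-to-model differences
# add in quadrature in the true ensemble.

import math
import random

random.seed(0)
n_models, n_cells = 20, 50

# Each "model" produces a field: a common signal plus independent
# per-cell noise (assumption: model differences uncorrelated in space)
fields = [[1.0 + random.gauss(0, 0.3) for _ in range(n_cells)]
          for _ in range(n_models)]

def rf(field):
    # Stand-in radiative-forcing metric: a simple sum over the field
    return sum(field)

# True ensemble spread: std of the per-model metric values
vals = [rf(f) for f in fields]
mean_rf = sum(vals) / n_models
true_std = math.sqrt(sum((v - mean_rf) ** 2 for v in vals) / n_models)

# Spread estimated from the constructed mean+std field
mean_field = [sum(f[j] for f in fields) / n_models
              for j in range(n_cells)]
std_field = [math.sqrt(sum((f[j] - mean_field[j]) ** 2 for f in fields)
                       / n_models) for j in range(n_cells)]
est_std = rf([m + s for m, s in zip(mean_field, std_field)]) - rf(mean_field)
```

For uncorrelated per-cell noise the ratio est_std/true_std grows roughly as the square root of the number of grid cells, which is why the paper finds the constructed-field uncertainties can even exceed the full model range.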