9 results for BINARY-MIXTURES
in Helda - Digital Repository of University of Helsinki
Abstract:
The number of drug substances in formulation development in the pharmaceutical industry is increasing. Some of these are amorphous drugs with a glass transition below ambient temperature, and thus they are usually difficult to formulate and handle. One reason for this is their reduced viscosity, related to the stickiness of the drug, which makes them complicated to handle in unit operations. Thus, the aim of this thesis was to develop a new processing method for a sticky amorphous model material. Furthermore, the model materials were characterised before and after formulation, using several characterisation methods, to understand more precisely the prerequisites for the physical stability of the amorphous state against crystallisation. The model materials used were monoclinic paracetamol and citric acid anhydrate. Amorphous materials were prepared by melt quenching or by ethanol evaporation. The melt blends were found to have a slightly higher viscosity than the ethanol-evaporated materials. However, the melt-produced materials crystallised more easily upon consecutive shearing than the ethanol-evaporated materials. The only material that did not crystallise during shearing was the 50/50 (w/w, %) blend, regardless of the preparation method, and it was physically stable for at least two years in dry conditions. Shearing at varying temperatures was established as a method to measure the physical stability of amorphous materials under processing and storage conditions. The actual physical stability of the blends was better than that of the pure amorphous materials at ambient temperature. Molecular mobility was not related to the physical stability of the amorphous blends, observed as crystallisation. The molecular mobility of the 50/50 blend, derived from the spectral linewidth as a function of temperature using solid-state NMR, correlated better with the molecular mobility derived from rheometry than with that derived from differential scanning calorimetry data. Based on the results obtained, molecular interactions, the thermodynamic driving force and the miscibility of the blends are discussed as the key factors stabilising the blends. Stickiness was found to be affected by the glass transition and viscosity. Ultrasound extrusion and cutting were successfully tested to increase the processability of the sticky material. Furthermore, it was found to be possible to process the physically stable 50/50 blend in a supercooled liquid state instead of a glassy state. The method was not found to accelerate crystallisation. This may open up new possibilities to process amorphous materials that are otherwise impossible to manufacture into solid dosage forms.
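A common way to reason about the glass transition of a binary amorphous blend such as the one above is the Gordon-Taylor equation, which estimates the blend Tg from the component Tg values and their weight fractions. The sketch below is a minimal illustration written for this summary, not taken from the thesis; the component Tg values and the constant K are hypothetical placeholders.

    # Hedged sketch: Gordon-Taylor estimate of the glass transition of a
    # binary amorphous blend. All values are illustrative placeholders,
    # not data from the thesis.

    def gordon_taylor_tg(w1, tg1, tg2, k):
        """Tg of a w1/(1-w1) (w/w) blend; temperatures in kelvin."""
        w2 = 1.0 - w1
        return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

    # Hypothetical component Tg values and fitting constant K
    tg_blend = gordon_taylor_tg(w1=0.5, tg1=298.0, tg2=284.0, k=1.0)
    print(f"Estimated Tg of the 50/50 blend: {tg_blend:.1f} K")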
Abstract:
Water-ethanol mixtures are commonly used in industry and households. However, quite surprisingly, their molecular-level structure is still not completely understood. In particular, there is evidence that the local intermolecular geometries depend significantly on the concentration. The aim of this study was to gain information on the molecular-level structures of water-ethanol mixtures by two computational methods: classical molecular dynamics (MD), in which the movement of molecules can be studied, and x-ray Compton scattering, in which the scattering cross section is sensitive to the electron momentum density. Firstly, the water-ethanol mixtures were studied with MD simulations, with the mixture concentration ranging from 0 to 100%. Well-established force fields were used for the water and ethanol molecules (TIP4P and OPLS-AA, respectively). Moreover, two models were used for ethanol, rigid and non-rigid: in the rigid model the intramolecular bond lengths are fixed, whereas in the non-rigid model they are determined by harmonic potentials. Secondly, mixtures at three different concentrations, employing both ethanol models, were studied by calculating the experimentally observable x-ray quantity, the Compton profile. In the MD simulations a slight underestimation of the density was observed compared to experiment. Furthermore, a positive excess of hydrogen bonding with water molecules and a negative one with ethanol was quantified. The mixture was also found to be more structured when the ethanol concentration was higher. Negligible differences between the two ethanol models were found in these results. In contrast, a notable difference between the ethanol models was observed in the Compton scattering results: for the rigid model the Compton profiles were similar for all the concentrations, but for the non-rigid model they were distinct. This leads to two possible pictures of how the mixing occurs. Either the mixing is similar at all concentrations (as suggested by the rigid model), or the mixing changes with concentration (as suggested by the non-rigid model). Either way, this study shows that the choice of force field is essential for the microscopic structure that forms in the MD simulations. When the sources of uncertainty in the calculated Compton profiles were analyzed, it was found that more statistics need to be collected to reduce the statistical uncertainty in the final results. The Compton scattering results obtained can therefore be considered somewhat preliminary, but they are clearly indicative of the behaviour of the water-ethanol mixtures when the force field is modified. The next step is to collect more statistics and compare the results with experimental data to decide which ethanol model describes the mixture better. In this way, valuable information on the microscopic structure of water-ethanol mixtures can be obtained, together with information on the force fields used in the MD simulations and on the ability of MD simulations to reproduce the microscopic structure of binary liquids.
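For a spherically averaged electron momentum density rho(p), the Compton profile reduces to a one-dimensional integral, J(pz) = 2*pi * integral from |pz| to infinity of p*rho(p) dp. The sketch below evaluates that integral numerically for a tabulated momentum density; the Gaussian model density is purely illustrative and is not taken from the study.

    import numpy as np

    # Hedged sketch: spherically averaged Compton profile
    #   J(pz) = 2*pi * integral_{|pz|}^{inf} p * rho(p) dp
    # rho(p) below is an illustrative Gaussian model density, not data
    # from the study.

    p = np.linspace(0.0, 10.0, 2000)     # momentum grid (atomic units)
    rho = np.exp(-p**2)                  # placeholder momentum density

    def compton_profile(pz, p, rho):
        mask = p >= abs(pz)
        g = p[mask] * rho[mask]
        # trapezoidal integration over the truncated grid
        return 2.0 * np.pi * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(p[mask]))

    for pz in np.linspace(0.0, 3.0, 7):
        print(f"J({pz:.1f}) = {compton_profile(pz, p, rho):.4f}")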
Abstract:
The present study investigated potato starches and polyols used to prepare edible films. The amylose content and the gelatinization properties of starches extracted from different potato cultivars were determined. The amylose content of the potato starches varied between 11.9 and 20.1%. Onset temperatures of gelatinization of the potato starches in excess water, determined using differential scanning calorimetry (DSC), varied from 58 to 61°C independently of the amylose content. The crystallinity of selected native starches with low, medium and high amylose content was determined by X-ray diffraction; the relative crystallinity was found to be around 10-13% in selected native potato starches containing 13-17% water. The glass transition temperature, crystallization and melting behavior, and relaxations of the polyols erythritol, sorbitol and xylitol were determined using DSC, dielectric analysis (DEA) and dynamic mechanical analysis (DMA). The glass transition temperatures of xylitol and sorbitol decreased as a result of water plasticization. Anhydrous amorphous erythritol crystallized rapidly. Edible films were obtained from solutions containing gelatinized starch, plasticizer (a polyol or a binary polyol mixture) and water by casting and evaporating the water at 35°C. The study examined the effects of plasticizer type and content on the physical and mechanical properties of edible films stored at various relative water vapor pressures (RVP). The crystallinity of edible films with low, medium and high amylose content was determined by X-ray diffraction, and the films were found to be practically amorphous. Water sorption and water vapor permeability (WVP) of the films were affected by the type and content of plasticizer. WVP increased with increasing plasticizer content and storage RVP. Generally, Young's modulus and tensile strength decreased with increasing plasticizer and water content, with a concurrent increase in the elongation at break of the films. High contents of xylitol and sorbitol caused changes in the physical and mechanical properties of the films, probably due to phase separation and crystallization of xylitol and sorbitol, which were not observed when binary polyol mixtures were used as plasticizers. The mechanical properties and the WVP of the films were found to be independent of the amylose content.
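Water vapor permeability of such films is conventionally obtained gravimetrically: the steady-state water vapor transmission rate is multiplied by the film thickness and divided by the water vapor partial-pressure difference across the film. The arithmetic is sketched below with hypothetical numbers; these are illustrative placeholders, not measurements from the study.

    # Hedged sketch: water vapor permeability (WVP) from gravimetric data.
    #   WVTR = mass gain / (time * exposed film area)
    #   WVP  = WVTR * thickness / delta_p
    # All numbers are illustrative placeholders, not data from the study.

    mass_gain_g = 0.85          # test-cup mass gain over the test period (g)
    time_s = 24 * 3600.0        # test duration (s)
    area_m2 = 3.0e-3            # exposed film area (m^2)
    thickness_m = 80e-6         # film thickness (m)
    delta_p_pa = 1500.0         # water vapor partial-pressure difference (Pa)

    wvtr = mass_gain_g / (time_s * area_m2)      # g m^-2 s^-1
    wvp = wvtr * thickness_m / delta_p_pa        # g m^-1 s^-1 Pa^-1
    print(f"WVTR = {wvtr:.3e} g m^-2 s^-1, WVP = {wvp:.3e} g m^-1 s^-1 Pa^-1")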
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A; a classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm is well-scalable, especially with Fisher's exact test, and it can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
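The significance of a single candidate rule X->A can be evaluated from its 2x2 contingency table, for example with Fisher's exact test mentioned above; the thesis searches the exponential rule space for the globally best such rules, but evaluating one rule is easy to sketch. The counts below are illustrative placeholders written for this summary.

    from scipy.stats import fisher_exact

    # Hedged sketch: significance of one dependency rule X -> A via
    # Fisher's exact test on the 2x2 contingency table
    #             A            not A
    #   X         n_xa         n_x - n_xa
    #   not X     n_a - n_xa   n - n_x - n_a + n_xa
    # Counts are illustrative placeholders, not from the thesis data.

    n, n_x, n_a, n_xa = 1000, 200, 300, 120   # |data|, |X|, |A|, |X and A|
    table = [[n_xa, n_x - n_xa],
             [n_a - n_xa, n - n_x - n_a + n_xa]]

    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3g}")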
Abstract:
Nucleation is the first step of a first-order phase transition: a new phase always emerges in nucleation phenomena. The two main categories of nucleation are homogeneous nucleation, where the new phase is formed in a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. In this thesis the main attention is paid to heterogeneous nucleation. The thesis treats nucleation phenomena from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and the use of macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations, and the results obtained from the molecular simulations were interpreted by means of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different surfaces. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and of binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles; the results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface. It was found that the use of a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to a fair agreement between the theoretical predictions and experimental results. In the presented cases the microscopic angle was always smaller than the contact angle obtained from macroscopic measurements. Furthermore, the molecular Monte Carlo simulations revealed that the concept of a geometric contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
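In the classical theory, the influence of a flat pre-existing surface enters through the contact angle theta via the geometric factor f(theta) = (2 + cos theta)(1 - cos theta)^2 / 4, which scales the homogeneous nucleation barrier: dG*_het = f(theta) * dG*_hom. The sketch below computes this reduction; the contact angle value is an illustrative placeholder, not one fitted in the thesis.

    import math

    # Hedged sketch: classical heterogeneous nucleation on a flat surface.
    # The free-energy barrier is reduced relative to homogeneous nucleation
    # by the geometric factor f(theta) = (2 + cos t)(1 - cos t)^2 / 4.
    # The contact angle below is an illustrative placeholder.

    def geometric_factor(theta_deg):
        c = math.cos(math.radians(theta_deg))
        return (2.0 + c) * (1.0 - c) ** 2 / 4.0

    theta = 60.0                    # microscopic contact angle (degrees)
    f = geometric_factor(theta)
    print(f"f({theta:.0f} deg) = {f:.3f}: the barrier is {f:.0%} of the homogeneous one")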
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of static and dynamic probit models in forecasting U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples; in small samples, a parametric bootstrap method is suggested to obtain an approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of the predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
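The autoregressive structure referred to above can be sketched as a probit model whose linear index depends on its own lag and on lagged predictors: pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1}, with P(y_t = 1) = Phi(pi_t). The sketch below evaluates the log-likelihood of such a model for fixed parameters; the data and parameter values are illustrative placeholders, and this parametrisation is only one of several variants used in this literature, not necessarily the exact model of the thesis.

    import numpy as np
    from scipy.stats import norm

    # Hedged sketch: log-likelihood of a dynamic (autoregressive) probit model
    #   pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1},  P(y_t = 1) = Phi(pi_t)
    # Data and parameter values are illustrative placeholders.

    def ar_probit_loglik(params, y, x):
        omega, alpha, beta = params
        pi_prev, loglik = 0.0, 0.0
        for t in range(1, len(y)):
            pi_t = omega + alpha * pi_prev + beta * x[t - 1]
            p_t = norm.cdf(pi_t)
            loglik += y[t] * np.log(p_t) + (1 - y[t]) * np.log(1.0 - p_t)
            pi_prev = pi_t
        return loglik

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)                         # e.g. an interest-rate spread
    y = (rng.uniform(size=200) < 0.2).astype(int)    # e.g. a recession indicator
    print(ar_probit_loglik((-1.0, 0.5, -0.3), y, x))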
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, all of these patterns can be detected in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data. In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
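As a small illustration of one of these patterns, a binary matrix is perfectly nested if its rows can be ordered so that the set of columns containing 1s in each row is a subset of that of the previous, denser row. The sketch below checks this by sorting rows by their sums; it is a minimal illustration written for this summary, not the algorithm developed in the thesis, and it does not handle the noisy (minimum-flip) case.

    import numpy as np

    # Hedged sketch: check whether a 0/1 matrix is perfectly nested,
    # i.e. rows can be ordered so that each row's set of 1-columns is a
    # subset of the previous (denser) row's set. Minimal illustration only.

    def is_nested(m):
        m = np.asarray(m)
        order = np.argsort(-m.sum(axis=1))     # densest rows first
        rows = m[order]
        for prev, curr in zip(rows, rows[1:]):
            if np.any(curr > prev):            # a 1 outside the denser row's support
                return False
        return True

    example = [[1, 1, 1, 1],
               [1, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 0, 0, 0]]
    print(is_nested(example))   # True: each row is contained in the one above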