23 results for Pharmaceuticals formulations
Abstract:
This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herbal plant Angelica archangelica. The effects of the extraction parameters were studied, and sensory analyses of the extracts were performed by an expert panel. The results of the sensory analyses were compared to the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds. Very low (cryogenic) temperatures were applied to substantially enhance the separation efficiency of chiral SFC. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The process production rate, which is a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterified fish oil by reversed-phase SFC. Chiral separation of racemic mixtures of guaifenesin and a ferulic acid dimer ethyl ester was optimized using response surface methodology with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content) the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared to published results for a similar application. A novel pressure-compensated design of an industrial high-pressure chromatographic column was introduced, using technology developed in building the deep-sea submersibles Mir 1 and Mir 2. A demonstration SFC plant was built and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia. A smaller semi-pilot-scale column of similar design was used for the cryogenic chiral separation of the aromatase inhibitor Finrozole for use in its phase 2 development.
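As an illustration of the response-surface optimization mentioned above, models of the following generic second-order form are typically fitted to the process variables; the symbols here (y for the response such as production rate, x_i for coded factors such as temperature, load, flow rate and modifier content) are a textbook-style sketch, not coefficients taken from the thesis.

```latex
% Generic second-order response-surface model (illustrative only):
% y = response (e.g. production rate), x_i = coded factors,
% beta coefficients fitted by least squares.
\[
  y \;=\; \beta_0
        + \sum_{i=1}^{k} \beta_i x_i
        + \sum_{i=1}^{k} \beta_{ii} x_i^{2}
        + \sum_{i<j} \beta_{ij} x_i x_j
        + \varepsilon
\]
```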
Abstract:
Effective processing of powdered particles can facilitate powder handling and result in better drug product performance, which is of great importance in the pharmaceutical industry, where the majority of active pharmaceutical ingredients (APIs) are delivered as solid dosage forms. The purpose of this work was to develop a new ultrasound-assisted method for particle surface modification and thin-coating of pharmaceutical powders. The ultrasound was used to produce an aqueous mist with or without a coating agent. By using the proposed technique, it was possible to decrease interparticle interactions and improve the rheological properties of poorly flowing, water-soluble powders by aqueous smoothing of the rough surfaces of irregular particles. In turn, hydrophilic polymer thin-coating of a hydrophobic substance diminished triboelectrostatic charge transfer and improved the flowability of a highly cohesive powder. To determine the coating efficiency of the technique, the bioactive molecule β-galactosidase was layered onto the surface of powdered lactose particles. Enzyme-treated materials were analysed by assaying the quantity of the reaction product generated during enzymatic cleavage of the milk sugar. A near-linear increase in the thickness of the drug layer was obtained during progressive treatment. Using the enzyme coating procedure, it was confirmed that the ultrasound-assisted technique is suitable for processing labile protein materials. In addition, this pre-treatment of milk sugar could be used to improve the utilization of lactose-containing formulations by populations suffering from severe lactose intolerance. Furthermore, the applicability of the thin-coating technique for improving the homogeneity of low-dose solid dosage forms was shown. The carrier particles coated with API gave rise to a uniform distribution of the drug within the powder. The mixture remained homogeneous during further tabletting, whereas the reference physical powder mixture was subject to segregation. In conclusion, ultrasound-assisted surface engineering of pharmaceutical powders can be an effective technology for improving the formulation and performance of solid dosage forms such as dry powder inhalers (DPIs) and direct compression products.
Abstract:
Miniaturized analytical devices, such as the heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits such as faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used with low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work HN chips were developed further and new applications were introduced. First, a new method for the thermal and fluidic characterization of HN microchips was developed and used to study the chips. The thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil – an extremely complex sample – by microchip atmospheric pressure photoionization (APPI) high-resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability for high temperatures, microchip APPI provided efficient vaporization of nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was presented. Ionization was achieved by applying only high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with the HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to connect gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization. Microchip APPI allows efficient ionization of both polar and nonpolar compounds, whereas with the most popular technique, electrospray ionization (ESI), only polar and ionic molecules are ionized efficiently. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC–HN microchip was developed and presented. A single microdevice contained structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI. Such ionization of both nonpolar and polar analytes is not possible with previously presented chips for LC–MS, since they rely on ESI. The preliminary quantitative performance of the new chip was evaluated, and the chip was also demonstrated with optical detection. Finally, a new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into the MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.
Abstract:
Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. Various solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of the drug in the human body. Therefore, knowledge and control of the solid forms are fundamental to maintaining the safety and high quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and therefore change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products. This has led to poor understanding of processes and drug products. Therefore, new powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing processes are greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus provide molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, which involves simultaneous measurement of drug concentration in the dissolution medium and in-situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during the different stages, enabling molecular-level insight into the phenomena occurring. Near-infrared (NIR) spectroscopy was utilized in screening of polymorphs and processing-induced transformations (PITs). Polymorph screening was also studied with NIR and Raman spectroscopy in tandem. Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in-situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In-situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole, and provided a scientific explanation for changes in the dissolution rate. It was concluded that the methods applied and studied provide better process understanding and knowledge of the drug products, and therefore a way to achieve better quality.
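The quantitative in-line analysis mentioned above couples spectroscopy with partial least squares (PLS) regression. Below is a minimal calibration sketch, assuming a matrix X of preprocessed NIR or Raman spectra and a vector y of known solid-form fractions; the synthetic data, variable names and the use of scikit-learn are illustrative assumptions, not the workflow of the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Illustrative data shapes: 60 calibration spectra, 700 wavelength channels.
# In practice X would hold preprocessed in-line NIR/Raman spectra and
# y the known mass fraction of one solid form (e.g. a hydrate).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))
y = rng.uniform(0.0, 1.0, size=60)

# Fit a PLS model with a small number of latent variables; in a real
# application the number of components is chosen by cross-validation.
pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.3f}")

# The fitted model can then predict the solid-form fraction from new
# spectra acquired during, for example, fluidized bed drying.
pls.fit(X, y)
```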
Abstract:
Increasing attention has been focused on methods that deliver pharmacologically active compounds (e.g. drugs, peptides and proteins) in a controlled fashion, so that constant, sustained, site-specific or pulsatile action can be attained. Ion-exchange resins have been widely studied in medical and pharmaceutical applications, including controlled drug delivery, leading to the commercialisation of some resin-based formulations. Ion-exchangers provide an efficient means to adjust and control drug delivery, as the electrostatic interactions enable precise control of the ion-exchange process and, thus, more uniform and accurate control of drug release compared to systems that are based only on physical interactions. Unlike for the resins, only a few studies have been reported on ion-exchange fibers in drug delivery. However, ion-exchange fibers have many advantageous properties compared to conventional ion-exchange resins, such as more efficient compound loading into and release from the ion-exchanger, easier incorporation of drug-sized compounds, enhanced control of the ion-exchange process, better mechanical, chemical and thermal stability, and good formulation properties, which make the fibers attractive materials for controlled drug delivery systems. In this study, the factors affecting the nature and strength of the binding/loading of drug-sized model compounds into the ion-exchange fibers were evaluated comprehensively and, moreover, the controllability of subsequent drug release/delivery from the fibers was assessed by modifying the conditions of the external solutions. The feasibility of ion-exchange fibers for the simultaneous delivery of two drugs in combination was also studied by dual loading. Donnan theory and theoretical modelling were applied to gain a mechanistic understanding of these factors. The experimental results imply that incorporation of the model compounds into the ion-exchange fibers was attained mainly as a result of ionic bonding, with an additional contribution from non-specific interactions. Increasing the ion-exchange capacity of the fiber or decreasing the valence of the loaded compounds increased the molar loading, while more efficient release of the compounds was observed consistently under conditions where the valence or concentration of the extracting counter-ion was increased. Donnan theory was capable of fully interpreting the ion-exchange equilibria, and the theoretical modelling precisely supported the experimental observations. The physico-chemical characteristics (lipophilicity, hydrogen bonding ability) of the model compounds and the framework of the fibrous ion-exchanger influenced the affinity of the drugs towards the fibers and may, thus, affect both drug loading and release. It was concluded that precisely controlled drug delivery may be tailored for each compound, in particular by choosing a suitable ion-exchange fiber and optimizing the delivery system to take the external conditions into account, also when delivering two drugs simultaneously.
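The Donnan framework referred to above relates ion concentrations in the fiber phase to those in the external solution through a common Donnan potential. A minimal, idealized statement is sketched below; the symbols (external and fiber-phase concentrations, valences, Donnan potential, fixed-group concentration) are generic textbook notation, not parameters of the fibers studied in the thesis.

```latex
% Ideal Donnan equilibrium: every mobile ion i in the fiber phase (bars)
% experiences the same Donnan potential; electroneutrality inside the
% fiber includes the fixed charged groups X (the ion-exchange capacity).
\[
  \bar{c}_i = c_i \exp\!\left(-\frac{z_i F\,\Delta\phi_{\mathrm{D}}}{RT}\right),
  \qquad
  \sum_i z_i \bar{c}_i + z_X \bar{c}_X = 0
\]
```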
Abstract:
The study is a philosophical analysis of Israel Scheffler's philosophy of education, focusing on three crucial conceptions in his philosophy: the conception of rationality, the conception of human nature, and the conception of reality. The interrelations of these three conceptions, as well as their relations to educational theorizing, are analysed and elaborated. A conceptual problem concerning Scheffler's ideal of rationality derives from his supposition of a strong analogy between science education and moral education in terms of the ideal of rationality. This analogy is argued to be conceptually problematic, since the interconnections of rationality, objectivity, and truth appear to differ from each other in the realms of ethics and science, given the presuppositions of ontological realism and ethical naturalism, to which Scheffler explicitly subscribes. This study considers two philosophical alternatives for solving this problem. The first alternative relates the analogy to the normative concept of personhood deriving from a teleological understanding of human nature. Nevertheless, this position turns out to be problematic for Scheffler, since he rejects all teleological thinking in his philosophy. The problem can be solved, as is argued, by limiting Scheffler's rejection of teleology – in light of his philosophical outlook as a whole – in a manner that allows a modest version of a teleological conception of human nature. The second alternative, based especially on Scheffler's later contributions, is to suggest that reality is actually more complex and manifold than it appears to be in light of a contemporary naturalist worldview. This idea of plurealism – Scheffler's synthesis of pluralism and realism – is represented especially in Scheffler's contributions related to his debate with Nelson Goodman on constructivism and realism. The idea of plurealism is not only related to the ethics–science distinction, but more widely to the relationship between ontological realism and the incommensurable systems of description in diverse realms of human understanding. The Scheffler–Goodman debate is also analysed in relation to the contemporary constructivism–realism debate in educational philosophy. In terms of educational questions, Scheffler's plurealism is argued to offer a fruitful perspective. Scheffler's philosophy of education can be interpreted as searching for solutions to the problems deriving from the tension between the tradition of analytical philosophy and the complexity and multiplicity of educational reality. The complexity of reality combined with the supposition of the limitedness of human knowledge does not lead Scheffler to relativism or particularism; in contrast, Schefflerian formulations of rationality and objectivity preserve the possibility of critical inquiry in all realms of educational reality. In light of this study, Scheffler's philosophy of education provides an exceptional example of combining ontological realism, epistemological fallibilism, and the defence of the ideal of rationality with a wide-ranging understanding of educational reality.
Abstract:
Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. The merging of these two complementary aspects of nature, quantum theory and gravity, is one of the greatest goals of modern fundamental physics; achieving it would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, tedious divergences to which our previous well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this context, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem. The final solution remains a challenge for the future.
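For context, the basic mathematical setting referred to above is usually introduced by promoting the spacetime coordinates to noncommuting operators and deforming products of fields into Moyal star products; the canonical (constant θ) case is shown below purely as a generic illustration, not as the specific formulation used in the thesis.

```latex
% Canonical (Moyal-type) noncommutativity: theta^{mu nu} is a constant
% antisymmetric matrix with dimensions of length squared.
\[
  [\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
  \qquad
  (f \star g)(x)
    = f(x)\,
      \exp\!\left(\frac{i}{2}\,\theta^{\mu\nu}
                  \overleftarrow{\partial}_{\mu}\overrightarrow{\partial}_{\nu}\right)
      g(x)
\]
```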
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
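For readers unfamiliar with the term, a max-min linear program of the kind mentioned above can be stated in the following generic form (a textbook-style illustration with nonnegative coefficients, not notation taken from the thesis): maximise the smallest of several linear objectives subject to packing-type constraints.

```latex
% Generic max-min linear program with nonnegative data a_{ij}, c_{kj}:
% maximise the minimum of several linear objectives subject to
% packing-type constraints.
\[
  \begin{aligned}
    \text{maximise} \quad & \min_{k} \; \sum_{j} c_{kj}\, x_j \\
    \text{subject to} \quad & \sum_{j} a_{ij}\, x_j \le 1 \quad \text{for all } i, \\
                            & x_j \ge 0 \quad \text{for all } j .
  \end{aligned}
\]
```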
Abstract:
A national church, freedom of religion, and the state: the interpretation of freedom of religion formulated by the Synod of the Evangelical Lutheran Church of Finland in reference to the relationship between the Church and the state from 1963 to 2003. This paper discusses the interpretation of freedom of religion formulated by the Synod of the Evangelical Lutheran Church of Finland during the years 1963-2003. The effect of these formulations and decisions made by the Synod on the relationship between the Church and the state is also discussed, as the relationship has been a central issue in the debate about freedom of religion in Finland. Active co-operation with the state caused a dispute in the Church during this period. Another cause for concern for the Synod, a strong defender of the national church, was the weakening position of the Church in a society undergoing many changes. As the Synod of 1963 discussed the status of the Church, the Church began to reflect upon its identity as a national church and to evaluate freedom of religion in the country, as well as the relationship between the Church and the state. Some of the radicals of the 1960s and 1970s presented the Church as an obstacle to freedom of religion. The Synod was keen to emphasize that, in accordance with international agreements on human rights, freedom of religion means the freedom to have and follow a religion, and also that freedom of religion was a right of the majority in Finnish society. As an active guardian of the rights of its members, the Synod defended such issues as the teaching of religion in schools. Throughout the dispute, the Church focused on its right to act freely and, according to its identity, to express spirituality in society. At the end of the 1960s, several efforts to reform the law on freedom of religion and the relationship between the Church and the state gained favour in the Synod. These formulations of the Church were the basis for the work of a parliamentary committee in the 1970s, but no significant changes resulted. Instead, freedom of religion in Finland was judged to be fairly good. The committee paper did, however, lead to preparations for greater independence of the Church. The Synod at the time chose to react to the changes presented to it, but it was not until the 1990s that the Synod became an active force of reform in these matters. Though the Synod, particularly from the 1970s onwards, began clearly to favour improving the position of other religious communities in Finland, it felt it had reason to be cautious, as each church and religious community had the freedom to decide individually on its relationship with the state. Any changes that would have weakened the position of the Church in Finnish society were met with disapproval in the Synod. Even though some theological concerns regarding the national identity of the Church were raised, the Synod emphasized issues of church policy. Keen to preserve and protect its legal status in society, the Synod judged that this status supported the freedom of action enjoyed by the Church as well as freedom of religion.
Abstract:
The significance of carbohydrate-protein interactions in many biological phenomena is now widely acknowledged, and carbohydrate-based pharmaceuticals are under intensive development. The interactions between monomeric carbohydrate ligands and their receptors are usually of low affinity. To overcome this limitation, natural carbohydrate ligands are often organized as multivalent structures. Therefore, artificial carbohydrate pharmaceuticals should be constructed on the same principle, as multivalent carbohydrates or glycoclusters. Infections of specific host tissues by bacteria, viruses, and fungi are among the disease processes for which suitably designed carbohydrate inhibitors represent worthy targets. The bacterium Helicobacter pylori colonizes more than half of all people worldwide, causing gastritis and gastric ulcers and conferring a greater risk of stomach cancer. The present therapy for H. pylori relies on antibiotics, which is associated with an increasing incidence of bacterial resistance to traditional antibiotics. Therefore, the need for an alternative treatment method is urgent. In this study, four novel procedures for the synthesis of multivalent glycoconjugates were created. Three different scaffolds representing linear (chondroitin oligomer), cyclic (γ-cyclodextrin), and globular (dendrimer) molecules were used. Multivalent conjugates were produced using the human milk type oligosaccharides LNDFH I (Lewis-b hexasaccharide), LNnT (Galβ1-4GlcNAcβ1-3Galβ1-4Glc), and GlcNAcβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glc, all representing analogues of the tissue-binding epitopes of H. pylori. The first synthetic method involved reductive amination of scaffold molecules modified to express primary amine groups and, in the case of the dendrimer, direct amination to a scaffold molecule presenting 64 primary amine groups. The second method described a direct procedure for amidation of glycosylamine-modified oligosaccharides to scaffold molecules presenting carboxyl groups. The final two methods both employed an oxime linkage on linkers of different lengths. All the new synthetic procedures had the advantage of using unmodified reducing sugars as the starting material, making it easy to synthesize glycoconjugates of different specificities. In addition, the binding activity of an array of neoglycolipids to H. pylori was studied. Consequently, two new neolacto-based structures, Glcβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer and GlcAβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer, with binding activity toward H. pylori were discovered. Interestingly, N-methyl and N-ethyl amide modification of the glucuronic acid residue of GlcAβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer resulted in more effective H. pylori binding epitopes than the parent molecule.
Abstract:
Numerical models used for atmospheric research, weather prediction, and climate simulation describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more directly the details of orography influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. At the subgrid scale, different physical processes contribute at different scales. The parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale (MSO) and small-scale (SSO) orography on the simulated flow and a scheme for orographic effects on the surface-level radiation fluxes. Representation of orography, scale-dependencies of the simulated processes, and interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and the radiation flux parametrizations. Tools for diagnostics and validation are developed and presented. The parametrization schemes applied, developed, and validated in this study are currently being implemented in the reference version of HIRLAM.
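To make the scale separation described above concrete, mean orography on a model grid can be split into resolved and subgrid parts with a simple spectral low-pass filter. The sketch below is a generic illustration using a Gaussian filter on a doubly periodic elevation field; the cutoff of a few gridlengths, the function name and the synthetic terrain are assumptions made for the example, not the HIRLAM implementation.

```python
import numpy as np

def split_orography(elevation, dx, cutoff_gridlengths=4.0):
    """Split a 2-D elevation field (assumed periodic) into resolved and
    subgrid parts with a Gaussian spectral low-pass filter.

    elevation          : 2-D array of grid-cell mean heights [m]
    dx                 : grid spacing [m]
    cutoff_gridlengths : e-folding scale of the filter, in gridlengths
    """
    ny, nx = elevation.shape
    kx = np.fft.fftfreq(nx, d=dx) * 2.0 * np.pi
    ky = np.fft.fftfreq(ny, d=dx) * 2.0 * np.pi
    k2 = kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2

    # Gaussian response: close to 1 for scales much longer than the cutoff,
    # close to 0 for scales shorter than a few gridlengths.
    L = cutoff_gridlengths * dx
    response = np.exp(-0.5 * k2 * L ** 2)

    spec = np.fft.fft2(elevation)
    resolved = np.real(np.fft.ifft2(spec * response))
    subgrid = elevation - resolved   # this part feeds subgrid-scale parametrizations
    return resolved, subgrid

# Example with synthetic terrain on a 10 km grid.
rng = np.random.default_rng(1)
h = rng.normal(scale=300.0, size=(128, 128))
resolved, subgrid = split_orography(h, dx=10_000.0)
print(subgrid.std())
```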
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous. Therefore, logistic-type assisting models (e.g. a logistic or probit model) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error, and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
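For orientation, the classic model-assisted GREG estimator of a (domain class) total can be written in the standard textbook form below; this is a generic statement using conventional survey-sampling notation, not a formula reproduced from the thesis.

```latex
% Generic model-assisted GREG estimator of a (domain class) total:
% U = finite population, s = sample, pi_k = inclusion probability,
% yhat_k = fitted value from the assisting model (linear for GREG-lin,
% logistic-type for L-GREG applied to a binary class indicator).
\[
  \hat{t}_{y,\mathrm{GREG}}
    = \sum_{k \in U} \hat{y}_k
      + \sum_{k \in s} \frac{y_k - \hat{y}_k}{\pi_k}
\]
```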
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, the novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
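To give a concrete flavour of the matching task, one generic data-driven baseline is to compute pairwise distances between the two views in a common representation space and solve the resulting linear assignment problem. This is only an illustrative sketch under that assumption; the thesis' own matching algorithm is not reproduced here, and the function and variable names below are made up for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_samples(A, B):
    """Match each row of view A to one row of view B by minimising the
    total pairwise distance (a linear assignment problem).

    A, B : arrays of shape (n, d) holding the two views *after* they have
           been mapped into a common representation space.
    Returns `perm` such that A[i] is matched to B[perm[i]].
    """
    cost = cdist(A, B)                        # pairwise distances between views
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return cols

# Toy example: view B is a shuffled, noisy copy of view A.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 6))
true_perm = rng.permutation(30)
B = A[true_perm] + 0.05 * rng.normal(size=(30, 6))

perm = match_samples(A, B)
print(np.mean(perm == np.argsort(true_perm)))  # fraction of correct matches
```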