883 results for Complex polymerization method


Relevance:

80.00%

Publisher:

Abstract:

Catalysts containing mixtures of NiO, MgO and ZrO2 were synthesized by the polymerization method. They were characterized by X-ray diffraction (XRD), N2 physisorption (BET), X-ray photoelectron spectroscopy (XPS) and X-ray absorption near-edge structure (XANES), and then tested in the partial oxidation of methane (POM) in the presence of air (2CH4:1O2) at 750 °C for 6 h. Among the ternary oxides, the catalyst with 40 mol% MgO showed the highest conversion rates in the catalytic processes, but also the highest carbon deposition values (48 mmol h⁻¹). The greater the amount of NiO-MgO solid solution formed, the higher the conversion rate of the reactant (CH4), peaking at 40 mol% MgO. Catalysts with lower Ni content on the surface achieved a high rate of CH4 conversion into synthesis gas (H2 + CO). The formation of more NiO-MgO solid solution appeared to inhibit the deactivation of metallic Ni (Ni⁰) during reaction. The H2/CO product ratio was generally found to be slightly lower than the stoichiometric value. © 2012 Elsevier Ltd. All rights reserved.
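
For context, the stoichiometric benchmark that the reported H2/CO ratio is compared against follows directly from the ideal partial-oxidation reaction; this is standard combustion chemistry, not a derivation taken from the paper:

```latex
% requires amsmath
% Ideal partial oxidation of methane (POM) at the 2 CH4 : 1 O2 feed used above:
\begin{align*}
\mathrm{CH_4} + \tfrac{1}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{CO} + 2\,\mathrm{H_2}\\
\left(\mathrm{H_2}/\mathrm{CO}\right)_{\mathrm{stoich}} &= 2
\end{align*}
```

Ratios slightly below 2 are consistent with side reactions that consume H2 or generate extra CO, such as the reverse water-gas shift (CO2 + H2 → CO + H2O).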

Relevance:

80.00%

Publisher:

Abstract:

Nanocomposites obtained by polymerizing aniline in the presence of magnetite (Fe3O4) nanoparticles have been investigated in previous studies. However, little information is available on the redox interaction of the nanoparticle/conducting-polymer couple and on the stability that such an oxide can confer on the organic phase. In this work, Fe3O4 nanoparticles were incorporated into a PANi matrix by the in-situ oxidative polymerization method. A combination of X-ray diffraction, Mössbauer spectroscopy, transmission electron microscopy, UV-visible spectroscopy, cyclic voltammetry and Raman spectroscopy was used to understand the redox effect that the partially oxidized nanoparticles produce on the polymer. It was found that magnetite greatly stabilised PANi, mainly by enhancing the leucoemeraldine/emeraldine redox couple and also by reducing the bipolaronic state. © 2011 Elsevier B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

This article studies the applicability of poly(acrylamide)-methylcellulose (PAAm-MC) hydrogels as potential vehicles for the controlled, extended release of ammonium sulfate ((NH4)2SO4) and potassium phosphate (KH2PO4) fertilizers. PAAm-MC hydrogels with different acrylamide (AAm) and MC concentrations were prepared by a free-radical polymerization method. The adsorption and desorption kinetics of the fertilizers were determined from conductivity measurements using previously constructed analytical (calibration) curves. The addition of MC to the PAAm chains increased the quantities of (NH4)2SO4 and KH2PO4 loaded and extended both the duration and the quantities of fertilizer released. Consistently, both the loading and release processes were strongly influenced by the hydrophilic properties of the hydrogels (AAm/MC mass proportion). The best sorption (124.0 mg KH2PO4/g hydrogel and 58.0 mg (NH4)2SO4/g hydrogel) and desorption (54.9 mg KH2PO4/g hydrogel and 49.5 mg (NH4)2SO4/g hydrogel) properties were observed for 6.0% AAm-1.0% MC hydrogels (AAm/MC mass proportion of 6), indicating that these hydrogels are potentially viable for controlled, extended-release fertilizer systems. © 2011 Wiley Periodicals, Inc. J Appl Polym Sci 123: 2291-2298, 2012
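
As a rough illustration of how conductivity readings are converted into loaded and released fertilizer mass, a minimal sketch follows; the paper gives no code, and the calibration points, bath volume and gel mass below are all hypothetical:

```python
import numpy as np

# Hypothetical linear calibration curve: conductivity (uS/cm) vs. salt
# concentration (mg/L), built from standard solutions of the fertilizer.
cal_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # mg/L
cal_cond = np.array([2.0, 110.0, 215.0, 430.0, 850.0])  # uS/cm
slope, intercept = np.polyfit(cal_cond, cal_conc, 1)     # least-squares line

def concentration(cond_uS_cm: float) -> float:
    """Convert a measured conductivity into fertilizer concentration (mg/L)."""
    return slope * cond_uS_cm + intercept

# Loading: drop in bath concentration times bath volume, per gram of dry gel.
V_bath_L, m_gel_g = 0.05, 0.10                            # hypothetical values
loaded_mg_per_g = (concentration(850.0) - concentration(600.0)) * V_bath_L / m_gel_g
print(f"loaded: {loaded_mg_per_g:.1f} mg salt / g hydrogel")
```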

Relevance:

80.00%

Publisher:

Abstract:

In this work, colloids of liquid-crystalline polymers were prepared and investigated.

The dispersion-polymerization method for preparing colloids from liquid-crystalline polyacrylates was adapted to nonpolar solvents in order to make anisotropic colloids manipulable by electric fields. For this purpose, a mixture of THF and silicone oil was chosen as the reaction medium, and polysiloxane-based polymers and copolymers were employed as stabilizers. Unexpected effects on the mesogen configuration observed in this process prompted an investigation of how the mesogen configuration depends on the surface anchoring of the mesogens. Finally, control of the surface anchoring of the mesogens, and thus of the mesogen configuration, was achieved by exploiting the properties of liquid-crystalline/non-liquid-crystalline block copolymers. A new macroinitiator was also developed for this purpose. Small colloids could be rotated and arranged into lines by means of an electric field.

Several new polysiloxanes were synthesized for incorporation into liquid-crystalline colloids via miniemulsion. They were characterized and converted into colloids. Owing to transition temperatures that were too high, however, no structures of phase-separated polysiloxanes could be found for most of them. The formation of structures in such colloids could nevertheless be understood.

Actuating colloids were to be produced from cross-linked main-chain polymers. To this end, the corresponding main-chain polymer was synthesized, characterized and converted into colloids via miniemulsion. The resulting colloids were heated under the TEM and showed shape changes, which, however, were not yet controlled and were still irreversible.

Relevance:

80.00%

Publisher:

Abstract:

The soluble and stable fibrin monomer-fibrinogen complex (SF) is well known to be present in the circulating blood of healthy individuals and of patients with thrombotic diseases. However, its physiological role is not yet fully understood. To deepen our knowledge of this complex, a method for the quantitative analysis of the interaction between soluble fibrin monomers and surface-immobilized fibrinogen has been established by means of resonant mirror (IAsys) and surface plasmon resonance (BIAcore) biosensors. The protocols were optimized and validated by choosing appropriate immobilization procedures with regeneration steps and suitable fibrin concentrations. The highly specific binding of fibrin monomers to immobilized fibrin(ogen), or vice versa, was characterized by an affinity constant of approximately 10^-8 M, which accords better with the direct dissociation of fibrin triads (KD ≈ 10^-8 to 10^-9 M) (J. R. Shainoff and B. N. Dardik, Annals of the New York Academy of Sciences, 1983, Vol. 27, pp. 254-268) than with earlier estimates of the KD for the fibrin-fibrinogen complex (KD ≈ 10^-6 M) (J. L. Usero, C. Izquierdo, F. J. Burguillo, M. G. Roig, A. del Arco, and M. A. Herraez, International Journal of Biochemistry, 1981, Vol. 13, pp. 1191-1196).
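
Resonant-mirror and SPR instruments typically derive KD from a 1:1 Langmuir binding model. A minimal sketch of that model follows, with hypothetical rate constants chosen only to reproduce the 10^-8 M order of magnitude reported above; this is not the authors' fitting procedure:

```python
import numpy as np

# 1:1 Langmuir binding as used in typical SPR / resonant-mirror analysis:
# association-phase response: R(t) = Req * (1 - exp(-(kon*C + koff) * t))
kon, koff = 1.0e5, 1.0e-3   # hypothetical rate constants (1/(M*s), 1/s)
KD = koff / kon              # equilibrium dissociation constant (M)
print(f"KD = {KD:.1e} M")   # 1.0e-08 M, the order reported for fibrin-fibrin(ogen)

C = 20e-9                    # hypothetical fibrin monomer concentration (M)
t = np.linspace(0, 600, 7)   # s
Req = C / (C + KD)           # fractional occupancy at equilibrium
R = Req * (1 - np.exp(-(kon * C + koff) * t))
print(np.round(R, 3))
```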

Relevance:

80.00%

Publisher:

Abstract:

Synthetic oligonucleotides and peptides have found wide application in industry and in academic research labs. There are ~60 peptide drugs on the market and over 500 under development. Global annual sales of peptide drugs in 2010 were estimated at $13 billion. There are three oligonucleotide-based drugs on the market; among them, the newly FDA-approved Kynamro was predicted to reach $100 million in annual sales. Annual sales of oligonucleotides to academic labs were estimated at $700 million. Both bio-oligomers are mostly synthesized on automated synthesizers using solid-phase synthesis technology, in which nucleoside or amino acid monomers are added sequentially until the desired full-length sequence is reached. The additions cannot be complete, which generates truncated, undesired failure sequences. For almost all applications, these impurities must be removed. The most widely used method is HPLC. However, the method is slow, expensive, labor-intensive, not amenable to automation, difficult to scale up, and unsuitable for high-throughput purification. It requires large capital investment and consumes large volumes of harmful solvents. Purification costs are estimated at more than 50% of total production costs. Other methods for bio-oligomer purification also have drawbacks and are less favored than HPLC for most applications. To overcome the problems of known biopolymer purification technologies, we have developed two non-chromatographic purification methods: (1) catching failure sequences by polymerization, and (2) catching full-length sequences by polymerization. In the first method, a polymerizable group is attached to the failure sequences of the bio-oligomers during automated synthesis; purification is achieved by simply polymerizing the failure sequences into an insoluble gel and extracting the full-length sequences. In the second method, a polymerizable group is attached to the full-length sequences, which are then incorporated into a polymer; impurities are removed by washing, and the pure product is cleaved from the polymer. These methods need no chromatography, so the drawbacks of HPLC no longer apply. Purification is achieved by simple manipulations such as shaking and extraction. The methods are therefore suitable for large-scale purification of oligonucleotide and peptide drugs, and also ideal for high-throughput purification, for which there is currently high demand in research projects involving total gene synthesis. This dissertation presents the development of these techniques in detail. Chapter 1 introduces oligodeoxynucleotides (ODNs) and their synthesis and purification. Chapter 2 describes detailed studies of ODN purification by the catching-failure-sequences-by-polymerization method. Chapter 3 describes further optimization of this ODN purification technology to the level of practical use. Chapter 4 presents ODN purification by the catching-full-length-sequences-by-polymerization method using an acid-cleavable linker. Chapter 5 introduces peptides and their synthesis and purification. Chapter 6 describes studies using the catching-full-length-sequences-by-polymerization method for peptide purification.
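
The scale of the failure-sequence problem is simple arithmetic: with a per-step coupling efficiency p, only p^n of the chains are full-length after n couplings. A quick illustration (generic solid-phase synthesis arithmetic, not data from the dissertation):

```python
# Fraction of chains that are full-length after n couplings at per-step yield p;
# everything else is a capped or truncated failure sequence to be removed.
for p in (0.990, 0.995, 0.999):
    for n in (20, 60, 100):
        print(f"p = {p:.3f}, n = {n:3d}: full-length = {p**n:6.1%}")
```

Even at 99% stepwise yield, a 100-mer synthesis leaves only ~37% full-length product, which is why purification dominates production cost.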

Relevance:

80.00%

Publisher:

Abstract:

Transactional systems such as Enterprise Resource Planning (ERP) systems have been implemented widely, while analytical software such as Supply Chain Management (SCM) add-ons is adopted less by manufacturing companies. Although significant benefits are reported from SCM software implementations, companies are reluctant to invest in such systems. On the one hand this is due to the lack of methods able to detect the benefits of using SCM software; on the other hand, the associated costs are not identified, detailed and quantified sufficiently. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant investment in IT can be avoided. Therefore, the evaluation of these coordination procedures, in particular of the cost due to iterations, is of high managerial interest, and corresponding methods are comprehensive tools for strategic IT decision making. The purpose of this research is to provide evaluation methods that allow the comparison of different organizational forms and software support levels.

The research begins with a comprehensive introduction dealing with the business environment that industrial networks face and concludes by highlighting the challenges for the supply chain software industry. Afterwards, the central terminology is addressed, focusing on organization theory, the peculiarities of IT investment and the typology of supply chain management software. The literature review classifies recent supply chain management research with respect to organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management researchers carry out cost-benefit analyses of software implementations. The literature review reveals that the success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. The reviewed literature is mostly centered on the benefits of SCM software implementations. However, ERP-based supply chain coordination is still widespread industrial practice, and the associated coordination cost has not been addressed by researchers.

The fundamentals of efficient organizational design are explained in detail as far as required for understanding the synthesis of different organizational forms. Several coordination schemes were shaped by varying the following design parameters: organizational structure, coordination mechanisms and software support. The different organizational proposals are evaluated using a heuristic approach and a simulation-based method; in both cases the principles of organization theory are respected. A lack of performance is due to dependencies between activities that are not managed properly. Within the heuristic method, dependencies are therefore classified and their intensity is measured based on contextual factors. Afterwards, the suitability of each organizational design element for the management of a specific dependency is determined. Finally, each organizational form is evaluated based on the contribution of its design elements to coordination benefit and to coordination cost. Coordination benefit refers to improvement in logistic performance; this is the core concept of most supply chain evaluation models. By contrast, the coordination cost that must be incurred to achieve benefits is usually not considered in detail. Iterative processes are costly when executed manually, which is the case when SCM software is not implemented and the ERP system is the only available coordination instrument.

The heuristic model provides a simplified procedure for the classification of dependencies, the quantification of influence factors and the systematic search for adequate organizational forms and IT support. Discrete-event simulation is applied in the second evaluation model using the software package 'Plant Simulation'. Logistic performance is measured by manufacturing, inventory and transportation costs and by penalties for lost sales; coordination cost is considered explicitly, taking iterative coordination cycles into account. The method is applied to an exemplary supply chain configuration under various parameter settings. The simulation results confirm that, in most cases, benefit increases when coordination is intensified. However, in some situations where manual, iterative planning cycles are applied, additional coordination cost does not always lead to improved logistic performance. These unexpected results cannot be attributed to any particular parameter. The research confirms the great importance of dimensions disregarded until now when evaluating SCM concepts and IT tools. The heuristic method provides a quick but only approximate comparison of coordination efficiency for different organizational forms. In contrast, the more complex simulation method delivers detailed results that take into account specific parameter settings of the network context and the organizational design.
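
As a toy illustration of the heuristic evaluation described above, each organizational form can be scored by summing dependency-weighted benefits of its design elements and subtracting coordination cost. The structure follows the abstract, but every dependency, weight and score below is hypothetical:

```python
# Toy version of the heuristic evaluation: score each organizational form by
# summing, over all dependencies, the suitability of its design elements
# weighted by dependency intensity, then subtracting coordination cost.
dependencies = {"plan-sync": 0.8, "capacity-share": 0.5, "order-promise": 0.7}  # intensities

forms = {
    "ERP-only":      {"benefit": {"plan-sync": 0.3, "capacity-share": 0.4, "order-promise": 0.5},
                      "coordination_cost": 0.9},   # manual, iterative planning cycles
    "ERP+SCM-addon": {"benefit": {"plan-sync": 0.9, "capacity-share": 0.7, "order-promise": 0.8},
                      "coordination_cost": 0.4},   # automated coordination, higher IT investment
}

for name, form in forms.items():
    benefit = sum(w * form["benefit"][dep] for dep, w in dependencies.items())
    net = benefit - form["coordination_cost"]
    print(f"{name:14s} benefit={benefit:.2f} cost={form['coordination_cost']:.2f} net={net:.2f}")
```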

Relevance:

80.00%

Publisher:

Abstract:

This work presents a new methodology for virtual elastography in simulated ultrasound images using numerical methods and computer vision techniques. The goal is to estimate the elastic modulus of different tissues, taking as input two images of the same cross-section acquired at different instants and under different applied pressures. The methodology consists of computing a displacement field between the images with an optical flow method and then applying an iterative procedure to estimate the elastic moduli (inverse analysis) using numerical methods. Two optical flow formulations are used to compute the displacements: Lucas-Kanade and Brox. The inverse analysis is performed with two distinct numerical techniques, the Finite Element Method (FEM) and the Boundary Element Method (BEM), both implemented on general-purpose graphics processing units (GPGPUs). To handle an arbitrary number of materials, the Boundary Element Method implementation employs the sub-region technique to couple the matrices of the different structures identified in the image. The optimization process that determines the elastic constants is carried out semi-analytically using complex-variable calculus. The methodology is tested in three distinct stages: simulations without noise, simulations with added white Gaussian noise, and mathematical phantoms using speckle-noise tracking. The simulation results indicate that FEM is more accurate but computationally more expensive, while BEM shows tolerable errors and faster processing times.
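
The displacement-field step of this pipeline can be sketched with OpenCV's pyramidal Lucas-Kanade tracker. This is a minimal sketch, not the thesis implementation: the Brox formulation and the inverse FEM/BEM stage are omitted, and the image file names are placeholders:

```python
import cv2
import numpy as np

# Displacement field between two simulated ultrasound frames of the same
# cross-section (before and after compression) via pyramidal Lucas-Kanade.
img0 = cv2.imread("frame_pressure0.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
img1 = cv2.imread("frame_pressure1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

# Track good features; speckle patches behave like trackable corners.
pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts1, status, _err = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None,
                                              winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
displacements = (pts1 - pts0).reshape(-1, 2)[ok]   # (dx, dy) per tracked point
print("mean |u| =", np.linalg.norm(displacements, axis=1).mean())
# These displacements would feed the inverse FEM/BEM step, which iteratively
# adjusts the elastic moduli until simulated and measured fields agree.
```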

Relevance:

80.00%

Publisher:

Abstract:

Completed under the joint supervision of the Université de Montréal (Anthropology) and the Université du Québec à Chicoutimi (Land-Use Planning)

Relevance:

80.00%

Publisher:

Abstract:

Permeability of the ocean crust is one of the most crucial parameters for constraining submarine fluid flow systems. Active hydrothermal fields are dynamic areas where fluid flow strongly affects the geochemistry and biology of the surrounding environment. There have been few permeability measurements in these regions, especially in felsic-hosted hydrothermal systems. We present a data set of 38 permeability and porosity measurements from the PACMANUS hydrothermal field, an actively venting, felsic hydrothermal field in the eastern Manus Basin. Permeability was measured using a complex transient method on 2.54-cm minicores. Permeability varies greatly between the samples, spanning over five orders of magnitude. Permeability decreases with depth and with decreasing porosity. When the alteration intensity of individual samples is considered, the relationships between depth, porosity and permeability become more clearly defined. For incompletely altered samples (defined as >5% fresh rock), permeability and porosity are constant with depth. For completely altered samples (defined as <5% fresh rock), permeability and porosity decrease with depth. On average, the permeability values from the PACMANUS hydrothermal field are greater than those in other submarine environments measured with similar core-scale laboratory methods; the average permeability, 4.5 × 10^-16 m^2, is two to four orders of magnitude greater than in other areas. Although the core-scale permeability is higher than in other seafloor environments, it is still too low to produce the fluid velocities observed in the PACMANUS hydrothermal field based on simplified analytical calculations. It is likely that core-scale permeability measurements are not representative of the bulk rock permeability of the hydrothermal system overall, and that the latter is predominantly fracture controlled.
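
The "simplified analytical calculations" mentioned above are presumably Darcy-law estimates. A back-of-envelope version using the reported average permeability, with an assumed fluid viscosity and an entirely hypothetical pressure gradient:

```python
# Darcy-law estimate of fluid flux through rock of the measured permeability:
#   q = (k / mu) * dP/dz   (specific discharge, m/s; pore velocity = q / porosity)
k = 4.5e-16        # m^2, average core-scale permeability reported above
mu = 1.0e-4        # Pa*s, viscosity of hot hydrothermal fluid (assumed)
dP_dz = 1.0e4      # Pa/m, hypothetical driving pressure gradient
q = (k / mu) * dP_dz
print(f"q = {q:.2e} m/s  (~{q * 3.15e7:.1f} m/yr)")   # ~1.4 m/yr
```

Fluxes of order meters per year are far below the vigorous venting observed at active fields, which is the basis for the conclusion that bulk permeability must be fracture controlled.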

Relevance:

80.00%

Publisher:

Abstract:

Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, and should know the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined in more detailed studies. Where many variables potentially influence Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R^2 is less than 50% should be suspect, as it probably does not indicate the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables should be selected with great care, since inclusion of an obviously unimportant variable may have a significant impact on the required sample size.
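
A minimal sketch of the sample-size rule of thumb and the R^2 check, on synthetic data; the library choice and all numbers are illustrative, not from the text:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_vars = 6
n_obs = 10 * n_vars            # rule of thumb: 5-10 observations per predictor

X = rng.normal(size=(n_obs, n_vars))
y = 2.0 * X[:, 0] + rng.normal(scale=3.0, size=n_obs)   # one real effect, much noise

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)
print(f"R^2 = {r2:.2f}")       # per the caution above, treat R^2 < 0.5 with suspicion
```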

Relevance:

80.00%

Publisher:

Abstract:

The Octopus Automated Perimeter was validated in a comparative study and found to offer many advantages in the assessment of the visual field. The visual evoked potential (VEP) was investigated in an extensive study using a variety of stimulus parameters to simulate hemianopia and central visual field defects. The scalp potential was recorded topographically, and a technique to compute its source derivation was developed. This clarified the expected scalp distribution for half-field stimulation with different electrode montages. The visual evoked potential following full-field stimulation was found to be asymmetrical around the midline, with a bias over the left occiput, particularly when the foveal polar projections of the occipital cortex were preferentially stimulated. The half-field response reflected this asymmetry. Masking of the central 3° resulted in a response that was approximately symmetrical around the midline, but there was no evidence of the PNP-complex. A method for visual field quantification was developed, based on the neural representation of visual space (Drasdo and Peaston 1982), in an attempt to relate visual field deprivation to the resultant visual evoked potentials. There was no simple, diffuse summation between the scalp potential and the cortical generators. It was, however, possible to quantify the degree of scalp potential attenuation for M-scaled full-field stimuli. The results obtained from patients with pre-chiasmal lesions suggested that the PNP-complex is not scotomatous in nature, but confirmed that it is most likely related to specific diseases (Harding and Crews 1982). There was a strong correlation between the percentage information loss of the visual field and the diagnostic value of the visual evoked potential in patients with chiasmal lesions.
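
Source derivation of scalp potentials is conventionally computed as each electrode's potential minus the mean of its nearest neighbours (Hjorth's method); assuming that is the technique meant here, a minimal sketch with a hypothetical occipital montage and values:

```python
import numpy as np

# Hjorth-style source derivation: each electrode minus the mean of its
# nearest neighbours. The montage, neighbour map and values are hypothetical.
neighbours = {
    "O1": ["O2", "P3", "T5"],
    "O2": ["O1", "P4", "T6"],
    "Oz": ["O1", "O2", "Pz"],
}

def source_derivation(potentials: dict) -> dict:
    return {e: potentials[e] - np.mean([potentials[n] for n in nbrs])
            for e, nbrs in neighbours.items()}

v = {"O1": 5.2, "O2": 3.1, "Oz": 4.0, "P3": 2.0, "P4": 1.5, "T5": 0.8, "T6": 0.6, "Pz": 2.5}
print(source_derivation(v))   # accentuates local sources such as the left-occiput bias
```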