861 results for Process Analytical Technology (PAT)
Abstract:
Supercritical Emulsion Extraction technology (SEE-C) was proposed for the production of poly(lactic-co-glycolic) acid (PLGA) microcarriers. SEE-C operating parameters such as pressure, temperature, and flow-rate ratios were analyzed, and process performance was optimized in terms of size distribution and encapsulation efficiency. Microdevices loaded with bovine serum insulin were produced with different sizes (2 and 3 µm) or insulin loadings (3 and 6 mg/g) and with an encapsulation efficiency of 60%. The microcarriers were characterized in terms of insulin release profile in two different media (PBS and DMEM), and the diffusion and degradation constants were estimated using a mathematical model. PLGA microdevices were also used in the cultivation of embryonic ventricular myoblasts (rat cell line H9c2) in an FBS-free medium to monitor cell viability and growth as a function of the insulin released. Good cell viability and growth were observed on 3 µm microdevices loaded with 3 mg/g of insulin. PLGA microspheres loaded with growth factors (GFs) were then embedded in alginate scaffolds with human Mesenchymal Stem Cells (hMSC) for bone tissue engineering, with the aim of monitoring the effect of the local release of these signals on cell differentiation. These "living" 3D scaffolds were incubated in a direct-perfusion tubular bioreactor to enhance nutrient transport and expose the cells to a given shear stress. Different GFs (h-VEGF, h-BMP2, and a 1:1 mix of the two) were loaded, and alginate beads were recovered from dynamic (tubular perfusion system bioreactor) and static cultures at different time points (days 1, 7, and 21) for analytical assays: live/dead, alkaline phosphatase, osteocalcin, osteopontin, and von Kossa staining. The assays consistently showed better cell differentiation in the bioreactor than in static culture and revealed a strong influence of the BMP-2 released within the scaffold on cell differentiation.
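The diffusion and degradation constants were obtained by fitting a mathematical model to the measured release profiles; the model itself is not reproduced in this summary. As a purely illustrative sketch (the functional form, parameter names, and values below are assumptions, not the thesis's model), a burst-plus-degradation-accelerated first-order release curve could be evaluated as:

```python
import numpy as np

def release_fraction(t, k_diff, k_deg, burst=0.05):
    """Hypothetical cumulative release fraction at time t (days):
    an initial burst plus first-order diffusional release whose
    effective rate grows with matrix degradation (k_deg term)."""
    return burst + (1.0 - burst) * (1.0 - np.exp(-(k_diff + k_deg * t) * t))

# evaluate a candidate parameter set over 30 days
t = np.linspace(0.0, 30.0, 31)
f = release_fraction(t, k_diff=0.05, k_deg=0.002)
```

In practice `k_diff` and `k_deg` would be estimated by least-squares fitting against the measured PBS and DMEM release data.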
Abstract:
This research work is aimed at the valorization, in the livestock sector, of two types of pomace deriving from the mechanical extraction of extra virgin olive oil: olive pomace and a new by-product named "paté", both important sources of antioxidants and unsaturated fatty acids. The first study evaluated the suitability of dried stoned olive pomace as a dietary supplement for dairy buffaloes. Its effectiveness in modifying fatty acid composition and improving the oxidative stability of buffalo milk and mozzarella cheese was proven through the analysis of qualitative and quantitative parameters. The second study examined the use of paté as a dietary supplement for dairy ewes already fed a source of unsaturated fatty acids (extruded linseed), in order to assess the effect of this combination on the resulting dairy products. The paté itself was also characterized as a new by-product, and the optimal conditions for its stabilization and preservation were studied. The main result common to both studies was the detection and characterization of hydrophilic phenols in the milk. The detection of hydroxytyrosol and tyrosol in the milk of ewes fed paté, and of hydroxytyrosol in the milk of buffaloes fed pomace, demonstrated for the first time the presence in milk of hydroxytyrosol, one of the most important bioactive compounds of olive-industry products. The transfer of these antioxidants, together with the proven improvement in milk fat quality, could contribute to the prevention of some human cardiovascular diseases and some tumours, thus increasing the quality of dairy products and improving their shelf life. These results also provide important information on the bioavailability of these phenolic compounds.
Abstract:
This work analyses the role of metaphorical forms in the popularization of contemporary physics. The focus is on cognitive aspects: how can we explain formally complex physical concepts to an audience of non-experts without distorting their disciplinary meanings (communicating 'good physics')? Attention is directed at the very nature of explanation, and the problem concerns evaluating the effectiveness of scientific explanation addressed to non-professionals. To tackle this question, we searched for formal tools that could support the linguistic analysis of texts, and our attention turned to the possible role played by metaphorical forms in the construction of disciplinarily valid meanings. We refer in particular to the role of metaphor in understanding new meanings on the basis of known ones, a fundamental aspect for the phenomena of contemporary physics, which lie far from the ordinary perceptual sphere. The perspective of conceptual metaphor theory appeared especially promising as a tool of analysis. We therefore addressed the research problem by analysing several particularly significant metaphorical forms taken from popular accounts of contemporary physics. The thesis discusses in detail a case study from the standpoint of conceptual metaphor: an analogy used by Schrödinger for the elementary particle. The results of the analysis suggest that conceptual metaphor can be a promising tool both for evaluating the quality of the analogical and metaphorical forms used in explaining topics of contemporary physics and for creating new, more effective metaphors. Moreover, this analytical perspective seems to provide a tool for characterizing the very concept of 'good physics'.
Finally, we believe that further interesting research results may emerge from deepening the interdisciplinary approach between linguistics and physics.
Abstract:
This dissertation deals with the whole-rock analysis of stable silicon isotopes by multi-collector ICP-MS. The analyses were carried out in cooperation with the Royal Museum for Central Africa in Belgium. One focus of the first chapter is the first analysis of δ30Si on a conventional Nu Plasma™ multi-collector ICP-MS instrument, achieved by eliminating the 14N16O interference that overlaps the 30Si peak. The δ30Si analysis was made possible by technical modifications to the instrument that provided higher mass resolution. Careful characterization of an adequate reference material is indispensable for assessing the accuracy of a measurement; the characterization of U.S. Geological Survey reference materials forms the second focus of this chapter. The analysis of two Hawaiian standards (BHVO-1 and BHVO-2) demonstrates precise and accurate δ30Si determination and provides comparison data for quality control in other laboratories. The second chapter uses combined silicon and oxygen isotopes to investigate the origin of the silicification of volcanic rocks of the Barberton Greenstone Belt, South Africa. In contrast to today, silicification of near-surface layers, including chert formation, was a widespread process on the Precambrian ocean floor, and these horizons testify to extreme silicon mobilization in the early history of the Earth. This chapter presents silicon and oxygen isotope analyses of three rock profiles with variably silicified basalts and overlying bedded cherts of the 3.54, 3.45, and 3.33 Ga old Theespruit, Kromberg, and Hooggenoeg Formations.
In all three profiles, silicon isotopes, oxygen isotopes, and SiO2 contents correlate positively with the degree of silicification, but with different slopes of the δ30Si-δ18O relationships. Seawater is regarded as the source of the silicon for the silicification process. Calculations show that classical water-rock interaction cannot account for the silicon isotope variation, because the Si concentration in seawater is too low (49 ppm). The data are consistent with two-endmember mixing, with basalt and chert as the endmembers. Our present chert data confirm an increase in isotopic composition over time. Possible factors responsible for the different slopes of the δ30Si-δ18O relationships are changes in seawater isotopic composition, changes in water temperature, or secondary alteration effects. The last chapter addresses potential variations in the source region of Archean granitoids from the Si-isotope perspective. Sodic tonalite-trondhjemite-granodiorite (TTG) intrusions make up large portions of the Archean crust, whereas the present-day crust is more potassic (GMS group: granite-monzonite-syenite). The processes that led to this change from sodic to potassic crust are the subject of this chapter. Silicon isotope measurements were combined with major and trace element analyses of different generations of the 3.55 to 3.10 Ga old TTG and GMS intrusions from the study area. The δ30Si values of the different plutonic generations show a slight increase with time, with the sodic intrusions having the lowest Si isotope compositions. This slight increase in silicon isotope composition over time could point to different temperature conditions in the source region of the granitoids.
The formation of Na-rich granitoids with light δ30Si would accordingly have occurred at higher temperatures. The similarity of the δ30Si values of Archean and Phanerozoic K-rich plutons is also evident.
Abstract:
Theory and numerical modeling are fundamental tools for understanding, optimizing, and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistic, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent particle-in-cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied, and, for a range of parameters of interest for laser-plasma acceleration, the dependence of the self-injection threshold on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on clusters of Graphics Processing Units (GPUs), is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. Simulation campaigns run with jasmine to model recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
Abstract:
This dissertation deals with the translation of seven books for children written by the Chicana author Pat Mora. I became interested in the Chicano world, a world suspended between Mexico and the United States, after reading a book by Sandra Cisneros. Deciding to pursue this curiosity, I discovered a hybrid reality rich in history, culture, and traditions. In this context, the language used is characterized by continuous code-switching between Spanish and English, which I found an interesting phenomenon from a literary and translation point of view. During my research into Chicano culture, I came across Pat Mora. Her books for children fascinated me because of their topical themes (cultural diversity and the defense of identity) and their beautiful illustrations. I therefore chose to translate seven of her books, believing they could be an enrichment for children's literature in Italy. The work consists of five chapters. The first deals with the identity of the Chicano people, their history, their literature, and their language. In the second chapter, I outline Pat Mora's profile, presenting her biography and analyzing her most famous works. In the third chapter, I introduce the seven books to be translated and point out their plots and main themes. In the fourth chapter, I present the translation of the books. The fifth chapter is the translation commentary: a linguistic analysis of the source texts and an analysis of the target texts, focusing on the choices made during the translation process.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates, such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
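A minimal sketch of the dynamic-constraint idea described above, assuming a unit-gain second-order lag and simple Euler integration (the parameter values are illustrative, not those identified from the engine data):

```python
import numpy as np

def achieved(u, dt, wn, zeta, y0=0.0):
    """Simulate y'' + 2*zeta*wn*y' + wn**2*y = wn**2*u, mapping a
    commanded air-handling trajectory u(t) to an achievable one y(t).
    wn (rad/s) and zeta stand in for the identified second-order
    parameters of the PD-controlled air-handling system."""
    y, yd = y0, 0.0
    out = np.empty(len(u))
    for i, ui in enumerate(u):
        ydd = wn**2 * (ui - y) - 2.0 * zeta * wn * yd  # second-order dynamics
        yd += dt * ydd
        y += dt * yd
        out[i] = y
    return out

# step command: the achieved value lags the commanded one
u = np.ones(5000)
y = achieved(u, dt=1e-3, wn=5.0, zeta=0.7)
```

The optimizer would then treat `y`, not `u`, as the air-handling trajectory available to the transient emission models.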
Abstract:
With the publication of the quality guideline ICH Q9 "Quality Risk Management" by the International Conference on Harmonization, risk management has already become a standard requirement during the life cycle of a pharmaceutical product. Failure mode and effect analysis (FMEA) is a powerful risk analysis tool that has been used for decades in mechanical and electrical industries. However, the adaptation of the FMEA methodology to biopharmaceutical processes brings about some difficulties. The proposal presented here is intended to serve as a brief but nevertheless comprehensive and detailed guideline on how to conduct a biopharmaceutical process FMEA. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. The application for such a biopharmaceutical process FMEA is widespread. It can be useful whenever a biopharmaceutical manufacturing process is developed or scaled-up, or when it is transferred to a different manufacturing site. It may also be conducted during substantial optimization of an existing process or the development of a second-generation process. According to their resulting risk ratings, process parameters can be ranked for importance and important variables for process development, characterization, or validation can be identified. LAY ABSTRACT: Health authorities around the world ask pharmaceutical companies to manage risk during development and manufacturing of pharmaceuticals. The so-called failure mode and effect analysis (FMEA) is an established risk analysis tool that has been used for decades in mechanical and electrical industries. However, the adaptation of the FMEA methodology to pharmaceutical processes that use modern biotechnology (biopharmaceutical processes) brings about some difficulties, because those biopharmaceutical processes differ from processes in mechanical and electrical industries. 
The proposal presented here explains how a biopharmaceutical process FMEA can be conducted. It includes a detailed 1-to-10-scale FMEA rating table for occurrence, severity, and detectability of failures that has been especially designed for typical biopharmaceutical processes. With the help of this guideline, different details of the manufacturing process can be ranked according to their potential risks, and this can help pharmaceutical companies to identify aspects with high potential risks and to react accordingly to improve the safety of medicines.
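With occurrence, severity, and detectability each scored on the 1-to-10 scales described above, ranking is conventionally done with the Risk Priority Number, RPN = occurrence × severity × detectability. A brief sketch (the failure modes and scores below are invented for illustration, not taken from the guideline):

```python
def rpn(occurrence, severity, detectability):
    """Risk Priority Number on 1-to-10 FMEA scales; higher means riskier.
    By convention, detectability 10 means 'hardest to detect'."""
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must lie on the 1-to-10 scale")
    return occurrence * severity * detectability

# hypothetical biopharmaceutical failure modes: (O, S, D)
failure_modes = {
    "bioreactor pH excursion":      (4, 8, 3),
    "column load density too high": (3, 6, 2),
    "filter integrity failure":     (2, 9, 5),
}

# rank process risks from highest to lowest RPN
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
```

Process parameters at the top of such a ranking are the candidates for process development, characterization, or validation mentioned above.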
Abstract:
Vibration serviceability is a widely recognized design criterion for assembly-type structures, such as stadiums, that are likely to be subjected to rhythmic human-induced excitation. Human-induced excitation of a structure occurs from the movement of the occupants, such as walking, running, jumping, or dancing. Vibration serviceability is based on the level of comfort that people have with the vibrations of a structure. Current design guidance uses the natural frequency of the structure to assess vibration serviceability. However, a phenomenon known as human-structure interaction suggests that there is a dynamic interaction between the structure and passive occupants, altering the natural frequency of the system. Human-structure interaction depends on many factors, including the dynamic properties of the structure, the posture of the occupants, and the relative size of the crowd. It is unknown whether the shift in natural frequency due to human-structure interaction is significant enough to warrant consideration in the design process. This study explores the interface of structural and crowd characteristics through experimental testing to determine whether human-structure interaction should be considered because of its potential impact on serviceability assessment. An experimental test structure that represents the dynamic properties of a cantilevered stadium structure was designed and constructed. Experimental modal analysis was used to determine the dynamic properties of the test structure both empty and occupied with up to seven people arranged in different locations and postures. Comparisons of the dynamic properties were made between the empty and occupied testing configurations and against analytical results from a dynamic crowd model recommended by the Joint Working Group of Europe. Data trends led to the development of a refined dynamic crowd model.
This dynamic model can be used in conjunction with a finite element model of the test structure to estimate the dynamic influence of human-structure interaction for occupants standing with straight knees. In the future, the crowd model will be refined further and can aid in assessing the dynamic properties of in-service stadium structures.
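The frequency shift caused by human-structure interaction can be illustrated with the simplest form of such a crowd model: the passive crowd as a single-degree-of-freedom mass-spring attached to an SDOF structure (the masses and frequencies below are illustrative, not the test structure's):

```python
import numpy as np

def natural_frequencies_hz(ms, ks, mc, kc):
    """Undamped natural frequencies (Hz) of the 2-DOF
    structure-plus-crowd system; the occupied system has two modes
    replacing the single empty-structure mode."""
    M = np.diag([ms, mc])
    K = np.array([[ks + kc, -kc],
                  [-kc,      kc]])
    lam = np.linalg.eigvals(np.linalg.solve(M, K))  # lam = omega^2
    return np.sort(np.sqrt(np.real(lam))) / (2.0 * np.pi)

# illustrative: 6 Hz empty structure, crowd oscillator tuned to 5 Hz
ms, fs = 10e3, 6.0   # structure modal mass (kg), empty frequency (Hz)
mc, fc = 1.5e3, 5.0  # crowd mass (kg), crowd frequency (Hz)
ks = (2 * np.pi * fs) ** 2 * ms
kc = (2 * np.pi * fc) ** 2 * mc
f1, f2 = natural_frequencies_hz(ms, ks, mc, kc)
```

The occupied system exhibits two modes whose frequencies bracket the empty-structure and crowd frequencies; the size of that split is the effect the study set out to quantify.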
Abstract:
With the development of micro systems, there is an increasing demand for integrable porous materials. In addition to conventional applications, such as filtration, wicking, and insulating, many new micro devices, including micro reactors, sensors, actuators, and optical components, can benefit from porous materials. Conventional porous materials, such as ceramics and polymers, however, cannot meet the challenges posed by micro systems, due to their incompatibility with standard micro-fabrication processes. In an effort to produce porous materials that can be used in micro systems, porous silicon (PS) generated by anodization of single-crystalline silicon has been investigated. In this work, the PS formation process has been extensively studied and characterized as a function of substrate type, crystal orientation, doping concentration, current density, and surfactant concentration and type. Anodization conditions have been optimized for producing very thick porous silicon layers with uniform pore size and for obtaining ideal pore morphologies. Three different types of porous silicon material have been successfully produced: mesoporous silicon, macroporous silicon with straight pores, and macroporous silicon with tortuous pores. Regular pore arrays with controllable pore size in the range of 2 µm to 6 µm have been demonstrated as well. Localized PS formation has been achieved by using an oxide/nitride/polysilicon stack as masking material, which can withstand anodization in hydrofluoric acid for up to twenty hours. A special etching cell with electrolytic liquid backside contact, along with two process flows, has been developed to enable the fabrication of thick macroporous silicon membranes with through-wafer pores. For device assembly, Si-Au and In-Au bonding technologies have been developed.
A very low bonding temperature (~200 °C) and thick/soft bonding layers (~6 µm) have been achieved with the In-Au bonding technology, which is able to compensate for the potentially rough surface of the porous silicon sample without introducing significant thermal stress. The application of porous silicon in micro systems has been demonstrated in a micro gas chromatograph system through two indispensable components, an integrated vapor source and an inlet filter, wherein porous silicon performs the basic functions of porous media: wicking and filtration. By utilizing a macroporous silicon wick, the calibration vapor source was able to produce uniform and repeatable vapor generation for n-decane, with less than 0.1% variation over 9 hours and less than 0.5% variation in rate over 7 days. With engineered porous silicon membranes, the inlet filter showed depth filtration with nearly 100% collection efficiency for particles larger than 0.3 µm in diameter, a low pressure drop of 523 Pa at a 20 sccm flow rate, and a filter capacity of 500 µg/cm².
Abstract:
The single-electron transistor (SET) is one of the best candidates for future nanoelectronic circuits because of its ultralow power consumption, small size, and unique functionality. SET devices operate on the principle of Coulomb blockade, which is more prominent at dimensions of a few nanometers. Typically, the SET device consists of two capacitively coupled ultra-small tunnel junctions with a nano-island between them. In order to observe Coulomb blockade effects in a SET device, the charging energy of the device has to be greater than the thermal energy. This condition limits the operation of most existing SET devices to cryogenic temperatures. Room-temperature operation of SET devices requires sub-10 nm nano-islands, due to the inverse dependence of the charging energy on the radius of the conducting nano-island. Fabrication of sub-10 nm structures using lithography processes is still a technological challenge. In the present investigation, Focused Ion Beam (FIB) based etch and deposition technology is used to fabricate single-electron transistor devices operating at room temperature. The SET device incorporates an array of tungsten nano-islands with an average diameter of 8 nm. The fabricated devices are characterized at room temperature, and clear Coulomb blockade and Coulomb oscillations are observed. An improvement in the resolution limitation of the FIB etching process is demonstrated by optimizing the thickness of the active layer. SET devices with structural and topological variations are developed to explore their impact on device behavior. The threshold voltage of the device was minimized to ~500 mV by reducing the source-drain gap of the device to 17 nm. Vertical source and drain terminals are fabricated to realize a single-dot-based SET device. A unique process flow is developed to fabricate Si-dot-based SET devices for better gate controllability in the device characteristics.
The parameters of the fabricated devices are extracted using a conductance model. Finally, the characteristics of these devices are validated against simulated data from theoretical modeling.
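The operating condition quoted above (charging energy greater than the thermal energy) can be checked with the textbook estimate Ec = e²/2C. As a simplifying assumption, the island capacitance is taken here as the self-capacitance of an isolated sphere; the real tunnel-junction capacitances would increase C and lower Ec:

```python
import math

E = 1.602176634e-19      # elementary charge (C)
K_B = 1.380649e-23       # Boltzmann constant (J/K)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def charging_energy(radius_m, eps_r=1.0):
    """Charging energy Ec = e^2 / (2C) of an isolated conducting sphere,
    using its self-capacitance C = 4*pi*eps0*eps_r*r."""
    c = 4.0 * math.pi * EPS0 * eps_r * radius_m
    return E**2 / (2.0 * c)

def blockade_observable(radius_m, temperature_k, margin=1.0):
    """Coulomb blockade requires Ec > margin * kB*T; margin=1 is the bare
    condition, while larger margins give cleaner characteristics."""
    return charging_energy(radius_m) > margin * K_B * temperature_k
```

For the 8 nm diameter (4 nm radius) tungsten islands of this work, the estimate gives Ec ≈ 0.18 eV, roughly seven times kBT at 300 K, consistent with room-temperature Coulomb blockade under this simplified model.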
Abstract:
Business strategy is important to all organizations. Nearly all Fortune 500 firms are implementing Enterprise Resource Planning (ERP) systems to improve the execution of their business strategy and its integration with their information technology (IT) strategy. Successful implementation of these multi-million-dollar software systems requires new emphasis on change management and on business-IT strategic alignment. This paper examines business and IT strategic alignment and explores whether an ERP implementation can drive business process reengineering and business-IT strategic alignment. An overview of business strategy and strategic alignment is followed by an analysis of ERP. The "As-Is/To-Be" process model is then presented and explained as a simple but vital tool for improving business strategy, strategic alignment, and ERP implementation success.
Abstract:
Electrospinning (ES) can readily produce polymer fibers with cross-sectional dimensions ranging from tens of nanometers to tens of microns. Qualitative estimates of surface area coverage are rather intuitive. However, quantitative analytical and numerical methods for predicting surface coverage during ES have not been covered in sufficient depth to be applied in the design of novel materials, surfaces, and devices from ES fibers. This article presents a modeling approach to ES surface coverage where an analytical model is derived for use in quantitative prediction of surface coverage of ES fibers. The analytical model is used to predict the diameter of circular deposition areas of constant field strength and constant electrostatic force. Experimental results of polyvinyl alcohol fibers are reported and compared to numerical models to supplement the analytical model derived. The analytical model provides scientists and engineers a method for estimating surface area coverage. Both applied voltage and capillary-to-collection-plate separation are treated as independent variables for the analysis. The electric field produced by the ES process was modeled using COMSOL Multiphysics software to determine a correlation between the applied field strength and the size of the deposition area of the ES fibers. MATLAB scripts were utilized to combine the numerical COMSOL results with derived analytical equations. Experimental results reinforce the parametric trends produced via modeling and lend credibility to the use of modeling techniques for the qualitative prediction of surface area coverage from ES. (Copyright: 2014 American Vacuum Society.)
Abstract:
Lipoproteins are a heterogeneous population of blood plasma particles composed of apolipoproteins and lipids. Lipoproteins transport exogenous and endogenous triglycerides and cholesterol from sites of absorption and formation to sites of storage and usage. Three major classes of lipoproteins are distinguished according to their density: high-density (HDL), low-density (LDL) and very low-density lipoproteins (VLDL). While HDLs contain mainly apolipoproteins of lower molecular weight, the two other classes contain apolipoprotein B and apolipoprotein (a) together with triglycerides and cholesterol. HDL concentrations were found to be inversely related to coronary heart disease and LDL/VLDL concentrations directly related. Although many studies have been published in this area, few have concentrated on the exact protein composition of lipoprotein particles. Lipoproteins were separated by density gradient ultracentrifugation into different subclasses. Native gel electrophoresis revealed different gel migration behaviour of the particles, with less dense particles having higher apparent hydrodynamic radii than denser particles. Apolipoprotein composition profiles were measured by matrix-assisted laser desorption/ionization-mass spectrometry on a macromizer instrument, equipped with the recently introduced cryodetector technology, and revealed differences in apolipoprotein composition between HDL subclasses. By combining these profiles with protein identifications from native and denaturing polyacrylamide gels by liquid chromatography-tandem mass spectrometry, we characterized comprehensively the exact protein composition of different lipoprotein particles. 
We concluded that the differential display of protein weight information acquired by macromizer mass spectrometry is an excellent tool for revealing structural variations of different lipoprotein particles, and hence the foundation is laid for the screening of cardiovascular disease risk factors associated with lipoproteins.
Abstract:
The notion of outsourcing – making arrangements with an external entity for the provision of goods or services to supplement or replace internal efforts – has been around for centuries. The outsourcing of information systems (IS) is, however, a much newer concept, but one that has been growing dramatically. This book attempts to synthesize what is known about IS outsourcing by dividing the subject into three interrelated parts: (1) Traditional Information Technology Outsourcing, (2) Information Technology Offshoring, and (3) Business Process Outsourcing. The book should be of interest to all academics and students in the field of Information Systems, as well as to corporate executives and professionals who seek a more profound analysis and understanding of the underlying factors and mechanisms of outsourcing.