939 results for Process control automation device industry
Abstract:
In the first paper presented to you today by Dr. Spencer, an expert in the field of animal biology and at the same time an official authority, you heard about the requirements imposed on a chemical before it will ever be accepted, past the various official hurdles, as a proven tool in wildlife management. Many characteristics have to be known and highly sophisticated tests have to be run. In many instances the governmental agency maintains its own screening, testing, or analytical programs according to standard procedures. For reasons of economy and time, however, it would be impossible for these agencies to work out all the necessary data themselves. They therefore depend largely on the information furnished by the individual company, which naturally has to be established as conscientiously as possible. This, among other things, Dr. Spencer has made very clear; and this is also what causes quite a few headaches for the individual company. But I am certainly not speaking only for myself in saying that industry fully realizes its important role in developing materials for vertebrate control and the responsibilities that come with it. This cooperative work with the official institutions is, however, only one part, and for the most part the smallest part, of the work that industry devotes to the development of compounds for pest control. It actually concerns only those very few compounds already known to be effective. But how does one learn about their properties in the first place? How does industry make the selection from the many thousands of compounds synthesized each year? This, by far, creates the biggest problems, at least from the scientific and technical standpoint. Let us pause here for a short while and think about the possible ways of screening and selecting effective compounds. Basically there are two different ways. One is the empirical way: screening as large a number of compounds as possible, on the supposition that as the number of trials increases, so do the chances of a "hit". You could also call this approach the statistical or analytical one: the mass screening of new, mostly unknown candidate materials. This type of testing can only be performed by a producer of many new materials, that is, by large companies. It requires a tremendous investment in personnel, time, and equipment, and it is based on highly simplified but indicative test methods whose results must be reliable and representative for practical purposes. The other extreme is the intellectual way of theorizing about effective chemical configurations. Defenders of this method claim to be able, now or in the future, to predict biological effectiveness from the chemical structure or from certain groups within it. Some prior experience is necessary, that is, knowledge of the importance of certain molecular requirements; the detection of new and effective complete molecules is then a matter of coordination to be performed by smart people or computers. You could also call this the synthetic or coordinative method.
Abstract:
The nuisance wildlife control industry is rapidly expanding in New York State. To gain additional insight into this industry and the number of animals handled, we reviewed the 1989-90 annual logs submitted by Nuisance Wildlife Control Operators (NWCOs) to the New York State Department of Environmental Conservation (DEC). The specific objectives of this study were to determine: (1) the number and species of wildlife responsible for damage incidents, (2) the cause of damage complaints, (3) the disposition of animals handled, (4) the location of damage events (i.e., urban, suburban, rural), and (5) an estimate of the economic impact of the nuisance wildlife industry in Upstate New York. The Nuisance Wildlife Logs (NWLs) were examined for 7 urban and 7 rural counties (25.5% of Upstate counties), and these data were used to estimate total NWCO activity in DEC Regions 3 through 9 (excluding Long Island). Approximately 75% of NWCOs licensed by DEC were active during 1989-90, and nearly 2,800 complaints were handled in the 14 counties sampled. More than 90% of complaints came from urban counties, and we estimated that NWCOs responded to more than 11,000 calls in Upstate New York. At a conservative estimate of $35/call, revenue generated by this industry exceeded $385,000 annually. Six wildlife species accounted for 85% of the nuisance complaints in urban and rural counties. From 1986 to 1993, the number of NWCOs licensed by DEC nearly quadrupled, and there is no indication that this trend will change in the near future.
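The scaling arithmetic behind these estimates can be sketched as follows. This is one plausible reading of the numbers quoted above; the study's own extrapolation may have included further adjustments (for example, for the fraction of active operators), and all variable names are mine.

```python
# Sketch of the extrapolation arithmetic implied by the abstract (assumed,
# not the study's actual method): scale the sampled complaints up by the
# sampled fraction of counties, then apply the per-call revenue estimate.

sampled_complaints = 2800   # complaints handled in the 14 sampled counties
sample_fraction = 0.255     # those counties are 25.5% of Upstate counties
rate_per_call = 35.0        # conservative revenue estimate, $/call

estimated_calls = sampled_complaints / sample_fraction
estimated_revenue = estimated_calls * rate_per_call

print(f"estimated calls:   {estimated_calls:,.0f}")    # ~11,000
print(f"estimated revenue: ${estimated_revenue:,.0f}") # ~$385,000 at 11,000 calls
```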
Abstract:
Background: The gene YCL047C, which has been renamed promoter of filamentation gene (POF1), has recently been described as a cell component involved in yeast filamentous growth. The objective of this work is to understand the molecular and biological function of this gene. Results: Here, we report that the protein encoded by the POF1 gene, Pof1p, is an ATPase that may be part of the Saccharomyces cerevisiae protein quality control pathway. According to the results, Δpof1 cells showed increased sensitivity to hydrogen peroxide, tert-butyl hydroperoxide, heat shock and protein-unfolding agents such as dithiothreitol and tunicamycin. In addition, overexpression of POF1 suppressed the heat-shock sensitivity of Δpct1, a strain that lacks the gene encoding a phosphocholine cytidylyltransferase. In vitro analysis showed, however, that the purified Pof1p enzyme had no cytidylyltransferase activity but did have ATPase activity, with catalytic efficiency comparable to other ATPases involved in endoplasmic reticulum-associated degradation (ERAD) of proteins. Supporting these findings, co-immunoprecipitation experiments showed a physical interaction between Pof1p and Ubc7p (a ubiquitin-conjugating enzyme) in vivo. Conclusions: Taken together, the results strongly suggest that the biological function of Pof1p is related to the regulation of protein degradation.
Abstract:
This paper presents the new active-absorption wave basin, named Hydrodynamic Calibrator (HC), constructed at the University of São Paulo (USP) in the laboratory facilities of the Numerical Offshore Tank (TPN). The square (14 m × 14 m) tank is able to generate and absorb waves from 0.5 Hz to 2.0 Hz by means of 148 active hinged-flap wave makers. An independent mechanical system drives each flap by means of a 1 HP servo motor and a ball-screw transmission system. A customized ultrasonic wave probe installed in each flap measures the wave elevation at the flap. A complex automation architecture was implemented, with three Programmable Logic Controllers (PLCs); low-level software is responsible for all the interlocks and maintenance functions of the tank. All the control algorithms for generation and absorption are implemented in higher-level software (MATLAB/Simulink block diagrams). These algorithms calculate the motions of the wave makers both to generate and to absorb the required wave field, taking into account the layout of the flaps and the limits of wave generation. The experimental transfer function that relates the flap amplitude to the wave elevation amplitude is used to calculate the motion of each flap. This paper describes the main features of the tank, followed by a detailed presentation of the whole automation system: the measuring devices, signal conditioning, PLC and network architecture, real-time and synchronizing software, and the motor control loop. Finally, the whole automation system is validated by means of an experimental analysis of the transfer function of the generated waves and a calculation of all the delays introduced by the automation system.
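The flap-motion calculation from the measured transfer function can be illustrated with a short sketch: invert the flap-to-wave gain and phase at the wave frequency to obtain the flap command for a target wave elevation. The sample gain/phase values, function names, and stroke limit below are illustrative assumptions, not data from the HC.

```python
import numpy as np

# Minimal sketch: invert an experimentally measured flap-to-wave transfer
# function H(f) (gain and phase sampled over the tank's 0.5-2.0 Hz band)
# to find the flap amplitude and phase for a desired wave elevation.
freqs_hz = np.array([0.5, 1.0, 1.5, 2.0])       # illustrative sample points
gain = np.array([0.35, 0.80, 1.10, 0.95])       # (m of wave) / (m of flap)
phase_rad = np.array([0.10, 0.35, 0.60, 0.85])  # wave phase lag vs. flap

def flap_command(f, wave_amplitude, flap_limit=0.25):
    """Flap amplitude/phase needed for a target wave amplitude at frequency f."""
    g = np.interp(f, freqs_hz, gain)
    ph = np.interp(f, freqs_hz, phase_rad)
    amp = wave_amplitude / g              # invert the measured gain
    if amp > flap_limit:                  # respect the wave-generation limits
        raise ValueError("requested wave exceeds the flap stroke limit")
    return amp, -ph                       # lead the flap to cancel the lag

amp, ph = flap_command(1.0, wave_amplitude=0.10)
print(f"flap amplitude: {amp:.3f} m, phase offset: {ph:.2f} rad")
```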
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, and pressure profiles are designed so that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness, and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators, and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is faced. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems of profile reconstruction and preservation of temporal properties, and subsequently of the synchronization of different profiles, in networks adopting an event-triggered communication system are shown. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is improved to handle the other configurations.
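As an illustration of the phase-locked-loop idea, the following minimal sketch lets a slave regenerate a master's constant-velocity profile from irregular, event-triggered position updates, without a shared global clock. The gains, update pattern, and loop structure are my own illustrative assumptions, not the thesis design.

```python
# Sketch of a software PLL for profile reconstruction: position updates act
# as the phase detector, a PI loop filter sets the rate of a local profile
# generator that free-runs between updates.

dt = 0.001            # slave's local control period, s
kp, ki = 20.0, 100.0  # PI loop-filter gains (illustrative)

master_speed = 1.0    # master runs a constant-velocity profile: pos = v * t
est_pos = est_vel = integ = 0.0
t = 0.0

for step in range(5000):
    t += dt
    if step % 37 == 0:                        # event-triggered update arrives
        err = master_speed * t - est_pos      # phase detector at update instants
        integ += err
        est_vel = kp * err + ki * dt * integ  # loop filter sets the local rate
    est_pos += est_vel * dt                   # local generator free-runs

print(f"master position: {master_speed * t:.3f} m")
print(f"slave estimate:  {est_pos:.3f} m (converges toward the master ramp)")
```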
Abstract:
The aim of my dissertation is to provide new knowledge and applications of microfluidics to a variety of problems in materials science, devices, and biomedicine, where control over the fluid dynamics and over the local concentration of the solutions containing the relevant molecules (materials, precursors, or biomolecules) is crucial. The control of interfacial phenomena occurring in solutions at different length scales is compelling in nanotechnology for devising new sensors, molecular electronic devices, and memories. Microfluidic devices were fabricated and integrated with organic electronic devices. The transduction involves the species in the solution that fills the transistor channel, confined by the microfluidic device. This device measures what happens at the surface, a few nanometers from the semiconductor channel. Soft lithography was adopted to fabricate platinum electrodes, starting from a platinum carbonyl precursor. I propose a simple method to assemble these nanostructures into periodic arrays of microstripes and to form conductive electrodes with a characteristic dimension of 600 nm. The conductivity of these sub-microwires is compared with the values reported in the literature and with bulk platinum. The process is suitable for fabricating thin conductive patterns for electronic devices or electrochemical cells, where the periodicity of the conductive pattern is comparable with the diffusion length of the molecules in solution. The ordering induced among artificial nanostructures is of particular interest in science. I show that large building blocks, such as carbon nanotubes or core-shell nanoparticles, can be ordered and self-organized on a surface in patterns driven by capillary forces. The effective probability of inducing order with microfluidic flow is modeled with finite-element calculations on the real geometry of the microcapillaries in the soft-lithographic process. The oligomerization of the Aβ40 peptide in a microconfined environment represents a new investigation of this extensively studied peptide aggregation. The added value of the approach I devised is the precise control over the local concentration of peptides, together with the possibility of mimicking cellular crowding. Four populations of oligomers were distinguished, with diameters ranging from 15 to 200 nm. These aggregates could not be addressed separately in fluorescence. The statistical analysis of the atomic force microscopy images, together with a model of growth, reveals new insights into the kinetics of amyloidogenesis and allows me to identify the minimum stable nucleus size. This is an important result owing to its implications for the understanding, early diagnosis, and therapy of Alzheimer's disease.
Abstract:
Electronic devices based on organic semiconductors have gained increasing attention in nanotechnology, especially in the fields of field-effect transistors and photovoltaics. A promising class of materials in this research field is the polycyclic aromatic hydrocarbons (PAHs). Alkyl substitution of these graphene-like molecules results in self-organization into one-dimensional columnar superstructures and provides solubility and processability. The nano-phase separation between the π-stacking aromatic cores and the disordered peripheral alkyl chains leads to the formation of thermotropic mesophases. Hexa-peri-hexabenzocoronene (HBC), as an example of a PAH, exhibits some of the highest charge-carrier mobilities among mesogens, which makes these materials promising candidates for electronic devices. Prerequisites for efficient charge-carrier transport between electrodes are high material purity, to reduce possible trapping sites for charge carriers, and pronounced, defect-free, long-range order. Appropriate processing techniques are required to induce a high degree of aligned structures in the discotic material over macroscopic dimensions. Highly ordered supramolecular structures of different discotics, in particular of HBC derivatives, have been obtained by solution processing using the zone-casting technique, zone-melting, or simple extrusion. Simplicity and the fabrication of highly oriented columnar structures over long range are the most essential advantages of these zone-processing methods. A close relation between molecular design, self-aggregation, and the processing conditions has been revealed. The long-range order achieved by zone-casting proved to be suitable for field-effect transistors (FETs).
Abstract:
In this thesis, the effects of plasma actuators based on Dielectric Barrier Discharge (DBD) technology on a NACA 0015 two-dimensional airfoil have been analyzed experimentally at low Reynolds numbers. The work was carried out in partnership with the Department of Electrical Engineering of the Università di Bologna, in the wind tunnel of the Applied Aerodynamics Laboratory of the Aerospace Engineering faculty. In verifying the effectiveness of these active flow-control devices, the analysis has shown that the actuators succeed in preventing boundary-layer separation only under certain conditions of angle of attack and Reynolds number. Moreover, the effect of the actuators' chordwise position has also been analyzed, together with the influence of steady and unsteady operation.
Abstract:
Over the past 15 years, the Italian brewing scene has shown interesting changes, especially with regard to the creation of many breweries with an annual production of less than 10,000 hectoliters. The beers produced by microbreweries are very susceptible to attack by spoilage microorganisms that degrade the beer's quality characteristics. In addition, most microbreweries neither apply heat treatments for stabilization nor carry out quality checks on the product. The high incidence of beer-spoilage bacteria is an economic problem for the brewing industry because it can damage the brand and causes high product-recall costs. This thesis project studied the management of the production process in Italian microbreweries with an annual production of less than 10,000 hl. In particular, the annual production, type of plant, yeast management, process management, and cleaning and sanitizing practices of a representative sample of microbreweries were investigated. Furthermore, samples were collected in order to identify, with simple methods, which spoilage bacteria are most prevalent in Italian craft beers; 21% of the beers analyzed tested positive for lactic acid bacteria. These analytical data show the importance of understanding the weak points of the production process that allow spoilage bacteria to develop. Finally, the thesis examined the actual production of two microbreweries in order to understand the process-management practices that can promote the growth of spoilage bacteria in the beer and in the production plant. The analysis of the data from the two case studies was helpful in understanding the critical points where microorganisms most frequently come into contact with the product. Hygiene practices are crucial to ensure the quality of the finished product, especially in the case of non-pasteurized beer.
Abstract:
In the manufacture of solid dosage forms, granulation is a complex sub-process with high relevance for the quality of the pharmaceutical product. Fluidized-bed granulation is a special granulation process that combines the sub-processes of mixing, agglomeration, and drying in a single apparatus. Because it combines several process stages, this process in particular places special demands on comprehensive process understanding. The consistent pursuit of the PAT approach, published as a guideline by the American regulatory authority (FDA) in 2004, laid the foundation for continuous process improvement through increased process understanding, higher quality, and cost reduction. The present work dealt with the optimization of the fluidized-bed granulation processes of two process-sensitive drug formulations using PAT. For the enalapril formulation, a low-dose and highly active drug formulation, it was found that finer atomization of the granulation liquid yields considerably larger granules. Increasing the MassRatio reduces the droplet size, which leads to larger granules. If enalapril granules with a desired D50 particle size between 100 and 140 µm are to be produced, the MassRatio must be set at a high level; if enalapril granules with a D50 value between 80 and 120 µm are desired, the MassRatio must be set at a low level. The investigations showed that the MassRatio is an important parameter and can be used to control the particle size of the enalapril granules, provided that all other process parameters are kept constant. Examination of the intersection plots makes it possible to determine suitable settings of the process parameters and influencing variables that lead to the desired granule and tablet properties. From the position and size of the intersection region, the limits of the process parameters for producing the enalapril granules can be determined. If these limits, i.e. the design space of the process parameters, are respected, high product quality can be guaranteed. To produce high-quality enalapril tablets with the chosen formulation, the enalapril granulation should be carried out with the following process parameters: a low spray rate, a high MassRatio, an inlet air temperature of at least 50 °C, and an effective inlet air flow below 180 Nm³/h. If, on the other hand, a spray rate of 45 g/min and a medium MassRatio of 4.54 are set, the effective inlet air flow must be at least 200 Nm³/h and the inlet air temperature at least 60 °C in order to obtain predictably high tablet quality. Quality is thus built into the drug product during manufacture by keeping the process parameters of the enalapril granulation within the design space. For the metformin formulation, a high-dose but low-activity drug formulation, it was found that the growth mechanism of the fines fraction of the metformin granules differs from the growth mechanism of the D50 and D90 particle size fractions. The growth mechanism of the granules depends on the wetting of the particles by the sprayed liquid droplets and on the size ratio of particle to spray droplet. The influence of the MassRatio on the D10 particle size distribution of the granules is negligibly small. With the help of disturbance-variable investigations, the control efficiency of the process parameters was established for a low-dose (enalapril) and a high-dose (metformin) drug formulation, enabling extensive automation that reduces error sources by compensating for disturbances. This yields a closed PAT loop for the entire process chain. The process parameters spray rate and inlet air flow proved most suitable; readjustment via the inlet air temperature proved sluggish. Furthermore, manufacturing processes for granules and tablets were developed for two process-sensitive active ingredients. The robustness of these manufacturing processes against disturbance variables was demonstrated, creating the prerequisites for real-time release in the spirit of PAT. Quality control of the product does not take place at the end of the production chain; instead, it is carried out during the process and is based on a better understanding of the product and the process. In addition, the consistent pursuit of the PAT approach provided the opportunity for continuous process improvement, higher quality, and cost reduction, thus achieving the holistic goal of PAT.
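The reported enalapril design space lends itself to a simple in-process check, sketched below. The numeric limits for the second operating window come from the abstract; the "low"/"high" cut-offs for the first window are placeholder assumptions (the abstract gives them only qualitatively), and all names are mine.

```python
def within_design_space(spray_rate, mass_ratio, inlet_temp, air_flow):
    """Check granulation parameters against the two reported windows.

    spray_rate in g/min, inlet_temp in degC, air_flow in Nm^3/h.
    """
    # Window 1: "low" spray rate, "high" MassRatio, T >= 50 degC,
    # air flow < 180 Nm^3/h. The 40 g/min and 5.0 cut-offs are assumed.
    window_1 = (spray_rate < 40.0 and mass_ratio > 5.0
                and inlet_temp >= 50.0 and air_flow < 180.0)
    # Window 2: around 45 g/min and MassRatio ~4.54, with T >= 60 degC
    # and air flow >= 200 Nm^3/h (tolerance bands are assumed).
    window_2 = (40.0 <= spray_rate <= 50.0 and 4.0 <= mass_ratio <= 5.0
                and inlet_temp >= 60.0 and air_flow >= 200.0)
    return window_1 or window_2

print(within_design_space(30.0, 6.0, 55.0, 170.0))   # True  (window 1)
print(within_design_space(45.0, 4.54, 60.0, 200.0))  # True  (window 2)
print(within_design_space(45.0, 4.54, 55.0, 190.0))  # False (outside both)
```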
Abstract:
This work deals with the control of self-assembly and microstructure of organic semiconductors and their application in OFETs. In Chapters 3, 4, and 5, a new solution-based processing method, termed solvent vapor diffusion, is devised to control the self-assembly of semiconductor molecules on the surface. This method is a powerful tool that allows precise control over the microstructure, as demonstrated in Chapter 3 using the example of a donor-acceptor (D-A) dyad consisting of hexa-peri-hexabenzocoronene (HBC) as donor and perylene diimide (PDI) as acceptor. The combination of surface modification and solvent vapor can compensate for dewetting effects, so that the desired microstructure and molecular organization can be achieved on the surface. In Chapters 4 and 5, this method was used to control the self-assembly of dithieno[2,3-d;2',3'-d']benzo[1,2-b;4,5-b']dithiophene (DTBDT) and of a cyclopentadithiophene-benzothiadiazole copolymer (CDT-BTZ). The results may stimulate further studies and shed light on other high-performance conjugated polymers. In Chapters 6 and 7, monolayers, and the microstructures subsequently formed from them, of two conjugated polymers, poly(2,5-bis(3-alkylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) and poly{[N,N'-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5'-(2,2'-bithiophene)} (P(NDI2OD-T2)), were deposited on rigid surfaces by dip-coating. This is the first time that polymer monolayers have been deposited from solution. This approach can be extended to a wide range of other conjugated polymers. In Chapter 8, PDI-CN2 films were successfully deposited as monolayers, bilayers, and trilayers on surfaces of different roughness. For the first time, the influence of roughness on solution-processed thin films was clearly described.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and on differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and of the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration-dependent and, furthermore, how to choose training data that will result in good model generalization.
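The two data-processing steps named here, compensating transport delays and sensor lags, can be sketched generically. The snippet below inverts an assumed first-order sensor lag and then aligns the signal by cross-correlation; the lag model, time constants, and synthetic data are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def estimate_delay(reference, delayed, dt):
    """Estimate transport delay (s) by maximizing cross-correlation."""
    corr = np.correlate(delayed - delayed.mean(),
                        reference - reference.mean(), mode="full")
    lag = corr.argmax() - (len(reference) - 1)
    return lag * dt

def undo_first_order_lag(measured, tau, dt):
    """Approximately invert y' = (u - y)/tau to recover the fast signal u."""
    return measured + tau * np.gradient(measured, dt)

# Tiny demonstration on synthetic data.
dt, tau, delay_steps = 0.01, 0.2, 15
t = np.arange(0, 10, dt)
u = np.where(t > 2.0, 1.0, 0.0)            # true step input
y = np.zeros_like(u)                       # first-order lagged sensor response
for k in range(1, len(t)):
    y[k] = y[k-1] + dt * (u[k-1] - y[k-1]) / tau
y = np.roll(y, delay_steps)                # add transport delay (wrap ignored)

y_fast = undo_first_order_lag(y, tau, dt)  # undo the sensor lag first
print("estimated delay:", estimate_delay(u, y_fast, dt))  # ~0.15 s
aligned = np.roll(y_fast, -delay_steps)    # shift back into time alignment
```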
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and the data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated, rather than measured, transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage, relative to the distribution of the starting solution, to prevent extrapolation during the optimization process is proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies are used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
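The leverage idea can be made concrete with a short sketch: for a linear model with training design matrix X, the leverage of a candidate point x is h(x) = xᵀ(XᵀX)⁻¹x, and candidates whose leverage far exceeds typical training values are extrapolating and can be rejected during the search. The threshold and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Training design: intercept + 3 features, standing in for transient data.
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
XtX_inv = np.linalg.inv(X.T @ X)

def leverage(x):
    """Leverage of one candidate row x with respect to the training design."""
    return float(x @ XtX_inv @ x)

# Flag candidates whose leverage exceeds a multiple of the mean training
# leverage (= p/n for a p-parameter, n-point design); 3x is illustrative.
n, p = X.shape
threshold = 3.0 * p / n

inside = np.array([1.0, 0.2, -0.1, 0.5])   # near the training cloud
outside = np.array([1.0, 6.0, -5.0, 4.0])  # far outside it
for x in (inside, outside):
    flag = "extrapolating" if leverage(x) > threshold else "ok"
    print(f"leverage = {leverage(x):.4f} -> {flag}")
```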