950 results for Process parameters
Abstract:
CO(15NH2)2 enriched with the stable isotope 15N was synthesized by a reaction of CO, 15NH3, and S in the presence of CH3OH. The method differs from the industrial process: a stainless steel reactor internally lined with polytetrafluoroethylene (PTFE) was used in a discontinuous (batch) process at low pressure and temperature. The synthesis yield was evaluated as a function of the following parameters: amount of reagents, reaction time, addition of H2S, volume of liquid solution, and reaction temperature. Under optimum conditions (1.36, 4.01, and 4.48 g of 15NH3, CO, and S, respectively; 40 ml CH3OH; 40 mg H2S; 100 °C; 120 min of reaction), 1.82 g of the compound (76.5% yield) was obtained per batch. The synthesized CO(15NH2)2 contained 46.2% N and 0.55% biuret, had a melting point of 132.55 °C, and did not exhibit isotopic fractionation. The production cost of CO(15NH2)2 with 90.0 at.% 15N was US$ 238.60 per gram.
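As a rough consistency check of the reported yield, the sketch below assumes the overall reaction CO + 2 NH3 + S -> CO(NH2)2 + H2S with 15NH3 as the limiting reagent and approximate molar masses for 90 at.% 15N material; these assumptions are not stated in the abstract and the numbers are illustrative only.

```python
# Hypothetical check of the reported batch yield, assuming the overall
# reaction CO + 2 NH3 + S -> CO(NH2)2 + H2S with 15NH3 as limiting reagent.

AT_PCT_15N = 0.90                                        # reported isotopic enrichment
M_N = AT_PCT_15N * 15.000 + (1 - AT_PCT_15N) * 14.003    # mean N molar mass, g/mol
M_NH3 = M_N + 3 * 1.008                                  # ~17.9 g/mol
M_UREA = 12.011 + 15.999 + 2 * (M_N + 2 * 1.008)         # ~61.8 g/mol

m_nh3 = 1.36                   # g of 15NH3 charged per batch (from the abstract)
m_urea_obtained = 1.82         # g of urea recovered per batch (from the abstract)

n_nh3 = m_nh3 / M_NH3                        # mol of NH3
m_urea_theoretical = (n_nh3 / 2) * M_UREA    # 2 mol NH3 per mol urea

yield_pct = 100 * m_urea_obtained / m_urea_theoretical
print(f"theoretical maximum: {m_urea_theoretical:.2f} g")
print(f"yield: {yield_pct:.1f} %")   # ~77 %, consistent with the reported 76.5 %
```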
Abstract:
Background: Biofuels produced from sugarcane bagasse (SB) have shown promising results as a suitable alternative to gasoline. Biofuels provide unique strategic, environmental and socio-economic benefits. However, the production of biofuels from SB has a negative impact on the environment owing to the harsh chemicals used during pretreatment. Consecutive sulfuric acid-sodium hydroxide pretreatment of SB is an effective process that ameliorates the accessibility of cellulase towards cellulose for sugar production. The alkaline hydrolysate of SB is a black liquor containing a high amount of dissolved lignin. Results: This work evaluates the environmental impact of the residues generated during the consecutive acid-base pretreatment of SB. An advanced oxidative process (AOP) based on the photo-Fenton reaction mechanism (Fenton reagent/UV) was used. Experiments were performed in batch mode following an L9 factorial design (Taguchi orthogonal array design of experiments), considering three operating variables: temperature (°C), pH, and Fenton reagent (Fe2+/H2O2) + ultraviolet. Reduction of total phenolics (TP) and total organic carbon (TOC) were the response variables. Among the tested conditions, experiment 7 (temperature, 35°C; pH, 2.5; Fenton reagent, 144 ml H2O2 + 153 ml Fe2+; UV, 16 W) showed the maximum reduction in TP (98.65%) and TOC (95.73%). Parameters such as chemical oxygen demand (COD), biochemical oxygen demand (BOD), the BOD/COD ratio, color intensity and turbidity also changed significantly in the AOP-treated lignin solution compared with the native alkaline hydrolysate. Conclusion: The AOP based on the Fenton reagent/UV reaction mechanism efficiently removed TP and TOC from the sugarcane bagasse alkaline hydrolysate (lignin solution). To the best of our knowledge, this is the first report on the statistical optimization of TP and TOC removal from sugarcane bagasse alkaline hydrolysate employing a Fenton reagent-mediated AOP.
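For readers unfamiliar with the Taguchi L9 design mentioned above, the sketch below builds the standard L9 (3-level, 3-factor) orthogonal array and maps it to a run sheet. The factor level values are placeholders for illustration; the abstract only reports the settings of experiment 7.

```python
# Sketch of an L9 (3^3) Taguchi orthogonal array such as the one described above.
# The factor levels are placeholders: the abstract only reports experiment 7
# (35 degC, pH 2.5, 144 ml H2O2 + 153 ml Fe2+, 16 W UV).

L9 = [  # standard L9 array, levels coded 1..3, one column per factor
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

factors = {                      # hypothetical level values for illustration
    "temperature_C": [25, 35, 45],
    "pH":            [2.5, 4.0, 5.5],
    "fenton_UV":     ["low", "medium", "high"],
}

names = list(factors)
for run, row in enumerate(L9, start=1):
    setting = {name: factors[name][level - 1] for name, level in zip(names, row)}
    print(f"experiment {run}: {setting}")
```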
Abstract:
Subduction zones are the most favourable places for generating tsunamigenic earthquakes, since friction between the oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis focus on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Inferring the source parameters of tsunamigenic earthquakes is therefore crucial to understanding the generation of the consequent tsunami and thus to mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful to constrain the portion of the fault that extends offshore, generally close to the trench, which other kinds of data are not able to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. First, I present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault was inferred by inverting tsunami waveform, GPS and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, in both the near and the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, highlighting the crucial role that the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction and the rupture velocity on the fault. Furthermore, we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important because it may play a significant role in tsunami generation and, particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate a significant tsunami; this latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems, particularly in the near field, for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
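The joint-inversion idea can be illustrated with a minimal weighted linear least-squares sketch: two datasets linearly related to fault slip are stacked, each scaled by its noise level, and solved together. The Green's functions and data below are synthetic placeholders; real slip inversions additionally use smoothing regularization and non-negativity constraints, which are omitted here.

```python
import numpy as np

# Toy joint inversion: solve for slip m on n_patches fault patches from two
# datasets (tsunami waveforms and GPS offsets) related linearly to slip,
#   d_k = G_k m,  k in {tsunami, gps}.  G_k are random placeholder kernels.

rng = np.random.default_rng(0)
n_patches = 12
m_true = rng.uniform(0.0, 5.0, n_patches)            # "true" slip (m), synthetic

G_tsu = rng.normal(size=(200, n_patches))            # tsunami waveform kernels
G_gps = rng.normal(size=(30, n_patches))             # GPS offset kernels
d_tsu = G_tsu @ m_true + rng.normal(0, 0.5, 200)     # noisy synthetic data
d_gps = G_gps @ m_true + rng.normal(0, 0.05, 30)

# Weight each dataset by its noise level, then stack and solve in one
# least-squares problem; this is what makes the inversion "joint".
w_tsu, w_gps = 1.0 / 0.5, 1.0 / 0.05
G = np.vstack([w_tsu * G_tsu, w_gps * G_gps])
d = np.concatenate([w_tsu * d_tsu, w_gps * d_gps])

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("max abs slip error (m):", np.abs(m_est - m_true).max())
```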
Abstract:
In territories where food production is scattered among many small and medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the particular production process being carried out at a given time. Coupling high-efficiency micro-cogeneration units with easy-to-handle biomass conversion equipment capable of treating different materials would provide important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore seen as a key step towards their wide adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered of primary concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. The present work is accordingly divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (above all, the temperature profile), in order to point out the main differences which prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel spreadsheets, considering different values of the air-to-biomass ratio and taking downdraft gasification as the examined application. Differences in syngas production and working conditions (above all, process temperatures) among the considered fuels were related to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (involving volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; in this way the energy and mass balances involved in the process algorithm are also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion behaviour, based mainly on its chemical composition, can be obtained. Good agreement between the model results and literature and experimental data was found for almost all the considered materials (except refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to govern the main solid conversion steps involved in the gasification process.
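The water-gas shift equilibrium assumed in the gasification zone of the kinetic-free model can be sketched as follows. The Kp(T) correlation used here is a commonly cited empirical fit, not a value taken from this thesis, and the inlet composition is illustrative.

```python
import math

# Sketch: water-gas shift equilibrium, CO + H2O <=> CO2 + H2, as assumed in the
# gasification zone of a kinetic-free model.  The Kp correlation below is a
# commonly cited empirical fit, used here only for illustration.

def k_wgs(T_kelvin: float) -> float:
    return math.exp(4577.8 / T_kelvin - 4.33)

def wgs_equilibrium(n_co, n_h2o, n_co2, n_h2, T_kelvin):
    """Return equilibrium mole numbers via the extent of reaction x (bisection)."""
    K = k_wgs(T_kelvin)

    def residual(x):
        return (n_co2 + x) * (n_h2 + x) - K * (n_co - x) * (n_h2o - x)

    lo = -min(n_co2, n_h2)        # reverse-reaction limit
    hi = min(n_co, n_h2o)         # forward-reaction limit
    for _ in range(100):          # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    return n_co - x, n_h2o - x, n_co2 + x, n_h2 + x

# Example: a syngas leaving the oxidation zone at 1073 K (illustrative numbers).
print(wgs_equilibrium(n_co=1.0, n_h2o=0.8, n_co2=0.3, n_h2=0.5, T_kelvin=1073.0))
```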
The gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (above all, particle size), it was found that for all the considered materials the char gasification step is kinetically limited, so temperature is the main working parameter controlling this step. Solids drying is mainly governed by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, the working temperature, the particle size and the nature of the biomass itself (through its pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimate of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the design of gas cleaning lines is carried out.
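The claim that char gasification is kinetically limited, and therefore strongly temperature-controlled, can be illustrated with a first-order Arrhenius estimate of the required residence time. The pre-exponential factor and activation energy below are hypothetical placeholders, not parameters fitted in this thesis.

```python
import math

# Illustrative estimate of the char gasification residence time using a
# first-order Arrhenius rate, t = -ln(1 - X) / k(T).  The pre-exponential
# factor and activation energy are placeholder values, not fitted data.

R = 8.314          # J/(mol K)
A = 1.0e7          # 1/s, hypothetical pre-exponential factor
Ea = 180e3         # J/mol, hypothetical activation energy
X = 0.95           # target char conversion

def gasification_time(T_kelvin: float) -> float:
    k = A * math.exp(-Ea / (R * T_kelvin))
    return -math.log(1.0 - X) / k

for T in (1000, 1050, 1100, 1150):
    print(f"T = {T} K -> t = {gasification_time(T):8.1f} s")
# A change of only ~50 K alters the required time by a factor of ~2-3,
# which is why temperature is the controlling parameter for this step.
```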
Differently from other research efforts in the same field, the main scope here is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (above all, efficiency, temperature and pressure drops) and their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the same performance analysis of the cleaning units and from the likely synergic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved impossible to achieve for the two larger plant sizes. Indeed, since the use of temperature control devices was excluded from the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with one another on the basis of several operational parameters, including total pressure drops, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss associated with the gas cleaning process, the total enthalpy losses calculated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application for which they are adopted and the size of the cogeneration unit to which they are connected.
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances in which problems can emerge: data preparation, the actual mining, and the interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events; the use of interval-based recording leads to a significant improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first is the adaptation, to the control-flow discovery problem, of a frequency counting algorithm; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
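The abstract does not name the frequency counting algorithm adapted to on-line control-flow discovery; the sketch below shows one plausible instantiation, Lossy Counting applied to directly-follows pairs in an event stream. The event format (case id, activity) and class names are assumptions for illustration.

```python
from math import ceil

# Sketch: Lossy Counting adapted to maintain approximate frequencies of
# "directly-follows" pairs (a, b) over an event stream, a building block of a
# streaming control-flow discovery approach.

class DirectlyFollowsLossyCounter:
    def __init__(self, epsilon: float = 0.01):
        self.bucket_width = ceil(1.0 / epsilon)
        self.counts = {}          # (a, b) -> [frequency, max_error]
        self.last_activity = {}   # case_id -> last seen activity
        self.n = 0                # events processed so far

    def observe(self, case_id, activity):
        self.n += 1
        bucket = ceil(self.n / self.bucket_width)
        prev = self.last_activity.get(case_id)
        if prev is not None:
            pair = (prev, activity)
            if pair in self.counts:
                self.counts[pair][0] += 1
            else:
                self.counts[pair] = [1, bucket - 1]
        self.last_activity[case_id] = activity
        if self.n % self.bucket_width == 0:   # periodic pruning step
            self.counts = {p: fe for p, fe in self.counts.items()
                           if fe[0] + fe[1] > bucket}

    def frequent_pairs(self, support: float):
        """Pairs with estimated relative frequency of at least `support`."""
        threshold = (support - 1.0 / self.bucket_width) * self.n
        return {p for p, (f, _) in self.counts.items() if f >= threshold}

# Usage with a tiny synthetic stream of (case_id, activity) events:
stream = [(1, "a"), (2, "a"), (1, "b"), (2, "c"), (1, "d"), (2, "d")]
dfc = DirectlyFollowsLossyCounter(epsilon=0.1)
for case, act in stream:
    dfc.observe(case, act)
print(dfc.frequent_pairs(support=0.2))
```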
Abstract:
In the food industry, quality assurance requires low-cost methods for the rapid assessment of the parameters that affect product stability. Foodstuffs have a complex structure, composed mainly of gaseous, liquid and solid phases which often coexist in the same product. Special attention is given to water, whether as a natural component of most food products or as an ingredient added during a production process. In particular, water is structurally bound within the matrix and not completely available. Water can thus be present in foodstuffs in many different states: as water of crystallization, bound to protein or starch molecules, entrapped in biopolymer networks, or adsorbed on the solid surfaces of porous food particles. The traditional techniques for assessing food quality give reliable information but are destructive, time-consuming and unsuitable for on-line application. The techniques proposed here address this time constraint and are able to characterize the main compositional parameters. The dielectric response is mainly related to water and can provide information not only on the total water content but also on the degree of mobility of this ubiquitous molecule in different complex food matrices. The aim of this thesis is to answer this need: dielectric and electric tools are used to describe the complex food matrix and predict food characteristics. The thesis is structured in three main parts. In the first, theoretical tools are recalled in order to define the food parameters involved in quality assessment and the techniques able to address the problems identified. The second part explains the research conducted and illustrates the experimental plans in detail. The final section is devoted to rapid methods that can easily be implemented in an industrial process.
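As a minimal illustration of how the dielectric response relates to water mobility, the sketch below evaluates a single-relaxation Debye model with textbook parameters for free water at 25 °C. This is not the model used in the thesis: bound water in a real food matrix shows a lower, broader relaxation, so the numbers are purely illustrative.

```python
import math

# Illustrative single-relaxation Debye model for the complex permittivity,
#   eps(w) = eps_inf + (eps_s - eps_inf) / (1 + j*w*tau),
# with textbook parameters for free water at 25 degC.

EPS_S, EPS_INF, TAU = 78.4, 5.2, 8.27e-12   # static/infinite permittivity, relaxation time (s)

def debye(freq_hz: float) -> complex:
    w = 2 * math.pi * freq_hz
    return EPS_INF + (EPS_S - EPS_INF) / (1 + 1j * w * TAU)

for f in (1e8, 1e9, 1e10, 1e11):            # 100 MHz to 100 GHz
    eps = debye(f)
    print(f"{f:8.0e} Hz  eps' = {eps.real:6.1f}  eps'' = {-eps.imag:6.1f}")
```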
Abstract:
This thesis presents a process-based modelling approach to quantify carbon uptake by lichens and bryophytes at the global scale. Based on the modelled carbon uptake, potential global rates of nitrogen fixation, phosphorus uptake and chemical weathering by the organisms are estimated. In this way, the significance of lichens and bryophytes for global biogeochemical cycles can be assessed. The model uses gridded climate data and key properties of the habitat (e.g. disturbance intervals) to predict the processes that control net carbon uptake, namely photosynthesis, respiration, water uptake and evaporation. It relies on equations used in many dynamical vegetation models, combined with concepts specific to lichens and bryophytes, such as poikilohydry or the effect of water content on CO2 diffusivity. To incorporate the great functional variation of lichens and bryophytes at the global scale, the model parameters are characterised by broad ranges of possible values instead of a single, globally uniform value. The predicted terrestrial net uptake of 0.34 to 3.3 Gt/yr of carbon and the global patterns of productivity are in accordance with empirically derived estimates. Based on the simulated estimates of net carbon uptake, further impacts of lichens and bryophytes on biogeochemical cycles are quantified at the global scale. The focus is on three processes, namely nitrogen fixation, phosphorus uptake and chemical weathering. The presented estimates take the form of potential rates: the amounts of nitrogen and phosphorus needed by the organisms to build up biomass are quantified, also accounting for resorption and leaching of nutrients. Subsequently, the potential phosphorus uptake on bare ground is used to estimate chemical weathering by the organisms, assuming that they release weathering agents to obtain phosphorus. The predicted requirement for nitrogen ranges from 3.5 to 34 Tg/yr and for phosphorus from 0.46 to 4.6 Tg/yr. Estimates of chemical weathering are between 0.058 and 1.1 km³/yr of rock. These values have a realistic order of magnitude and support the notion that lichens and bryophytes have the potential to play an important role in global biogeochemical cycles.
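The strategy of using broad parameter ranges rather than single global values can be sketched as a Monte Carlo propagation: each uncertain parameter is sampled from its range, the model is evaluated for every sample, and the spread of outputs yields a range such as the 0.34 to 3.3 Gt/yr quoted above. The toy "model" and the ranges below are placeholders, not those of the thesis.

```python
import random

# Sketch of the "broad parameter ranges" strategy: sample each uncertain model
# parameter from a range, run a toy carbon-uptake model for every sample, and
# report the resulting range of estimates.

random.seed(42)

PARAM_RANGES = {                          # hypothetical physiological ranges
    "max_photosynthesis":   (0.5, 5.0),   # arbitrary units
    "respiration_factor":   (0.3, 0.7),   # fraction of gross uptake respired
    "active_time_fraction": (0.1, 0.6),   # fraction of time the thallus is wet
}

def toy_uptake(p):
    gross = p["max_photosynthesis"] * p["active_time_fraction"]
    return gross * (1.0 - p["respiration_factor"])   # net uptake per sample

samples = []
for _ in range(10_000):
    p = {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
    samples.append(toy_uptake(p))

samples.sort()
lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
print(f"net uptake, 95% range: {lo:.2f} to {hi:.2f} (arbitrary units)")
```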
Abstract:
Nowadays, environmental issues and climate change play fundamental roles in the design of urban spaces. Our cities are growing in size, often following only immediate needs without a long-term vision. Consequently, sustainable development has become not only an ethical but also a strategic need: we can no longer afford uncontrolled urban expansion. One serious effect of the industrialisation of the territory is the increase in urban air and surface temperatures compared with the outlying rural surroundings. This difference in temperature is what constitutes an urban heat island (UHI). The purpose of this study is to clarify the role of urban surfacing materials in the thermal dynamics of an urban space, providing useful indications and advice for mitigating the UHI. With this aim, four coloured concrete bricks were tested by measuring their emissivity and building up their heat release curves using infrared thermography. Two emissivity evaluation procedures were carried out and then compared. The performance of the samples was assessed, and the influence of colour on thermal behaviour was investigated. In addition, some external pavements were analysed: their albedo and emissivity were evaluated in order to understand their thermal behaviour under different conditions, and surface temperatures were recorded in a one-day measurement campaign. The ENVI-met software was used to simulate how the tested materials would behave in two typical urban scenarios, an urban canyon and an urban heat basin, and the improvements they can bring to the urban microclimate were investigated. The emissivities obtained for the bricks ranged between 0.92 and 0.97, suggesting a limited influence of colour on this parameter. Nonetheless, the white concrete brick showed the best thermal performance and the black one the worst, while the red and yellow ones showed nearly identical intermediate behaviour: in practice, colour did affect the overall thermal behaviour. Emissivity was also measured in the outdoor work, yielding (as expected) high values for the asphalt surfaces. Albedo measurements, conducted with a sunshine pyranometer, demonstrated the improvement in solar reflection given by the yellow paint, as well as the negative influence of haze on measurement accuracy. The ENVI-met simulations demonstrated the effectiveness of some of the tested materials in improving the thermal environment; in particular, the results showed good performance for the white bricks and granite in the heat basin scenario, and for the painted concrete and macadam in the urban canyon scenario. These materials can be considered valuable solutions for UHI mitigation.
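The way an emissivity value of 0.92-0.97 enters an infrared-thermography measurement can be illustrated with the standard grey-body correction derived from the Stefan-Boltzmann law; atmospheric attenuation is neglected and the temperatures below are illustrative, not measurements from the study.

```python
# Sketch of the grey-body correction used when converting an apparent
# (blackbody-equivalent) radiometric temperature into a surface temperature:
#   T_app^4 = eps * T_surf^4 + (1 - eps) * T_refl^4

def surface_temperature(t_apparent_k, emissivity, t_reflected_k):
    t4 = (t_apparent_k**4 - (1.0 - emissivity) * t_reflected_k**4) / emissivity
    return t4 ** 0.25

t_app = 320.0       # K, apparent temperature read by the IR camera (illustrative)
t_refl = 293.0      # K, reflected background temperature (illustrative)

for eps in (0.92, 0.95, 0.97):   # range reported for the coloured bricks
    t_s = surface_temperature(t_app, eps, t_refl)
    print(f"eps = {eps:.2f} -> surface T = {t_s:.1f} K")
```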
Abstract:
Recent studies have found that soil-atmosphere coupling, through soil moisture, is crucial for properly simulating heat-wave amplitude, duration and intensity. Moreover, it has been found that soil-moisture depletion in both winter and spring anticipates strong heat waves during the summer. In geophysical studies, irrigation can be regarded as an anthropogenic forcing on soil moisture, in addition to changes in land properties. In this study, irrigation was added to a hydrostatic limited-area model (BOLAM) coupled with the soil, and the response of the model to the irrigation perturbation was analysed during a dry summer season. To identify a dry summer with overall positive temperature anomalies, an extensive climatological characterization of 2015 was carried out; the method included a statistical validation of the reference-period distribution used to calculate the anomalies. Drought conditions were observed during summer 2015 and the previous seasons, over both the analysed region and the Alps; moreover, July was characterized as an extreme event with respect to the reference distribution. The numerical simulations covered the summer season of 2015 and consisted of two runs: a control run (CTR) with the soil coupling, and a perturbed run (IPR). The perturbation consists of a land-use mask created from the FAO Cropland dataset, over which an irrigation water flux of 3 mm/day was applied from 6 a.m. to 9 a.m. every day. The results show that the differences between CTR and IPR have a strong daily cycle. The main modifications concern the properties of the air masses, not the dynamics. However, changes in the circulation at the boundaries of the Po Valley are observed, and a diagnostic spatial correlation of the variable differences shows that the soil-moisture perturbation explains well the variation observed in the 2-metre temperature and in the latent heat fluxes; on the other hand, it does not explain the upslope and downslope spatial shifts observed during different periods of the day. Given these results, irrigation affects the atmospheric properties on a scale larger than the irrigated area itself; it is therefore important for daily forecasting, particularly during hot and dry periods.
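A minimal sketch of the irrigation perturbation described above is given below: an extra water flux of 3 mm/day is applied over cropland grid cells only between 06:00 and 09:00 local time. The grid, the cropland mask, the wetted soil-layer depth and the time stepping are hypothetical placeholders, not the BOLAM implementation.

```python
import numpy as np

# Sketch: add an irrigation water flux of 3 mm/day over cropland cells,
# spread uniformly over the 06:00-09:00 window, as a soil-moisture forcing.

IRRIGATION_MM_PER_DAY = 3.0
IRRIGATION_HOURS = range(6, 9)          # 06:00-09:00 local time
SOIL_LAYER_MM = 100.0                   # depth of the wetted soil layer (assumed)

def apply_irrigation(soil_moisture, cropland_mask, hour, dt_hours=1.0):
    """Increase volumetric soil moisture over irrigated cells for one time step."""
    if hour not in IRRIGATION_HOURS:
        return soil_moisture
    dose_mm = IRRIGATION_MM_PER_DAY * dt_hours / len(IRRIGATION_HOURS)
    return soil_moisture + cropland_mask * (dose_mm / SOIL_LAYER_MM)

# Tiny example: 3x3 grid, cropland in the centre row only.
sm = np.full((3, 3), 0.20)              # volumetric soil moisture (m3/m3)
mask = np.zeros((3, 3))
mask[1, :] = 1.0
for hour in range(24):
    sm = apply_irrigation(sm, mask, hour)
print(sm)
```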
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into the parameters actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
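The commanded-to-achieved translation performed by such a dynamic constraint model can be sketched as a second-order linear response integrated with a simple Euler scheme. The natural frequency and damping ratio below are placeholder values; in the study they were extracted from measured transient data, and the closed-loop PD behaviour is folded into these two parameters for simplicity.

```python
# Sketch of a "dynamic constraint" model: the achieved air-handling parameter
# follows the commanded value as a second-order linear system,
#   y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u(t),
# integrated with an explicit Euler scheme.  wn and zeta are placeholders.

def achieved_trajectory(commanded, dt=0.01, wn=4.0, zeta=0.7):
    y, ydot = commanded[0], 0.0
    out = []
    for u in commanded:
        yddot = wn**2 * (u - y) - 2.0 * zeta * wn * ydot
        ydot += yddot * dt
        y += ydot * dt
        out.append(y)
    return out

# Commanded boost step from 1.0 to 1.6 bar at t = 1 s (illustrative numbers).
n, dt = 500, 0.01
commanded = [1.0 if i * dt < 1.0 else 1.6 for i in range(n)]
achieved = achieved_trajectory(commanded, dt=dt)
print(f"achieved value 0.5 s after the step: {achieved[150]:.3f} bar")
```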
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and on differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but is not accounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
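One common way to account for transport delays when aligning engine-out signals with downstream analyzer readings is to estimate the lag that maximises the cross-correlation between the two signals and then shift one of them. The sketch below uses purely synthetic signals; it is an illustration of the idea, not the processing method actually used in the study.

```python
import numpy as np

# Sketch: estimating the transport delay between an engine-out signal and a
# delayed analyzer signal by maximising the cross-correlation, then shifting
# the analyzer signal back into alignment.  Signals are synthetic.

rng = np.random.default_rng(1)
n, true_delay = 2000, 37                      # samples; true transport delay

engine_signal = rng.normal(size=n).cumsum()   # random-walk "engine" signal
analyzer = np.roll(engine_signal, true_delay) # delayed copy ...
analyzer[:true_delay] = engine_signal[0]      # ... with the wrap-around removed
analyzer += rng.normal(0, 0.2, n)             # sensor noise

def estimate_delay(x, y, max_lag=100):
    m = len(x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corrs = [np.dot(x[: m - lag], y[lag:]) for lag in range(max_lag)]
    return int(np.argmax(corrs))

lag = estimate_delay(engine_signal, analyzer)
aligned = np.roll(analyzer, -lag)
print(f"estimated delay: {lag} samples (true: {true_delay})")
```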
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, is proposed and demonstrated. Separate from the issue of extrapolation is the need to prevent the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. The dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during the search iterations, each of which involves simulating the entire transient cycle. The resulting strategy differs from the corresponding manual calibration strategy, results in lower emissions and improved efficiency, and is intended to improve rather than replace the manual calibration process.
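Statistical leverage, mentioned above as an extrapolation guard, can be sketched as the diagonal of the hat matrix of the training design: candidate points whose leverage greatly exceeds that of the training data are flagged as extrapolation. The data and the simple max-leverage threshold below are placeholders, not the constraint actually used in the study.

```python
import numpy as np

# Sketch: statistical leverage as an extrapolation guard.  For a regression
# design matrix X, the leverage of a candidate point x is
#   h(x) = x^T (X^T X)^{-1} x,
# and points whose leverage exceeds those of the training data are flagged.

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, (200, 3))])  # training design
XtX_inv = np.linalg.inv(X.T @ X)

def leverage(x_row):
    return float(x_row @ XtX_inv @ x_row)

h_train = np.array([leverage(row) for row in X])
threshold = h_train.max()                 # simple rule: stay within the training hull

inside  = np.array([1.0, 0.2, -0.3, 0.5])    # a point inside the training range
outside = np.array([1.0, 2.5,  3.0, -2.0])   # a clear extrapolation
for name, x in (("inside", inside), ("outside", outside)):
    h = leverage(x)
    print(f"{name}: leverage = {h:.3f}  extrapolation = {h > threshold}")
```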
Abstract:
Biodegradable nanoparticles are at the forefront of drug delivery research, as they provide numerous advantages over traditional drug delivery methods. An important factor affecting the ability of nanoparticles to circulate within the bloodstream and interact with cells is their morphology. In this study, a novel processing method, confined impinging jet mixing, was used to form poly(lactic acid) nanoparticles through a solvent-diffusion process, with Pluronic F-127 used as a stabilizing agent. The study focused on the effects of Reynolds number (flow rate), surfactant presence during mixing, and polymer concentration on the morphology of poly(lactic acid) nanoparticles. In addition to examining the parameters affecting poly(lactic acid) morphology, this study attempted to improve nanoparticle isolation and purification methods in order to increase nanoparticle yield and to ensure that specific morphologies were not excluded during isolation and purification; the isolation and purification methods used were centrifugation and a stirred cell. The study successfully produced particles with pyramidal and cubic morphologies. Despite the successful production of these morphologies, the yield of non-spherical particles was very low, and great variability existed between replicate trials. Surfactant was found to be very important for the stabilization of nanoparticles in solution but appears to be unnecessary for the formation of nanoparticles. Isolation and purification methods that produce a high yield of surfactant-free particles have still not been perfected, and additional testing will be necessary for improvement.
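The link between flow rate and Reynolds number in a confined impinging jet mixer follows from Re = ρvd/μ with the mean jet velocity obtained from the volumetric flow. The jet diameter and fluid properties below are placeholder values, not those of the study.

```python
import math

# Sketch: jet Reynolds number for a confined impinging jet mixer,
#   Re = rho * v * d / mu,  with  v = Q / (pi * d^2 / 4).
# Geometry and fluid properties are placeholder values.

def jet_reynolds(flow_rate_ml_min: float, jet_diameter_mm: float,
                 density: float = 1000.0, viscosity: float = 1.0e-3) -> float:
    q = flow_rate_ml_min * 1e-6 / 60.0            # m3/s
    d = jet_diameter_mm * 1e-3                    # m
    v = q / (math.pi * d**2 / 4.0)                # mean jet velocity, m/s
    return density * v * d / viscosity

for q_ml_min in (10, 40, 120):
    re = jet_reynolds(q_ml_min, jet_diameter_mm=0.5)
    print(f"{q_ml_min:4d} ml/min -> Re = {re:.0f}")
```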
Abstract:
In order to improve the osseointegration of endosseous titanium implants, the structure and composition of the surface were modified. Mirror-polished commercially pure (cp) titanium substrates were coated by the sol-gel process with different oxides: TiO2, SiO2, Nb2O5 and SiO2-TiO2. The coatings were characterized physically and biologically. Infrared spectroscopy confirmed the absence of organic residues. Ellipsometry determined the thickness of the layers to be approximately 100 nm. High-resolution scanning electron microscopy (SEM) and atomic force microscopy revealed a nanoporous structure in the TiO2 and Nb2O5 layers, whereas the SiO2 and SiO2-TiO2 layers appeared almost smooth. The Ra values, as determined by white-light interferometry, ranged from 20 to 50 nm. The surface energy determined by the sessile-drop contact angle method revealed the highest polar component for SiO2 (30.7 mJ/m2) and the lowest for cp-Ti and 316L stainless steel (6.7 mJ/m2). The cytocompatibility of the oxide layers was investigated with MC3T3-E1 osteoblasts in vitro (proliferation, vitality, morphology and cytochemical/immunolabelling of actin and vinculin). Higher cell proliferation rates were found on SiO2-TiO2 and TiO2, and lower rates on Nb2O5 and SiO2, whereas the vitality rates increased for cp-Ti and Nb2O5. Cytochemical assays showed that all substrates induced a normal cytoskeleton and well-developed focal adhesion contacts. SEM revealed good cell attachment on all coating layers. In conclusion, the sol-gel-derived oxide layers were thin, pure and nanostructured; the consequent differences in osteoblast response to these coatings are explained by the mutual action and coadjustment of different interrelated surface parameters.
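Polar and dispersive surface-energy components such as those quoted above are typically obtained from sessile-drop contact angles of two probe liquids. The abstract does not state which model was used; the sketch below shows one common choice, the Owens-Wendt decomposition, with illustrative contact angles and literature values for the probe liquids.

```python
import math

# Sketch: Owens-Wendt decomposition of solid surface energy into dispersive and
# polar components from contact angles of two probe liquids,
#   gamma_L (1 + cos theta) = 2 [ sqrt(gS_d * gL_d) + sqrt(gS_p * gL_p) ].
# Contact angles are illustrative; liquid data are standard literature values.

LIQUIDS = {  # total, dispersive, polar surface tension (mJ/m2)
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}

def owens_wendt(theta_water_deg, theta_diiodo_deg):
    # One equation per liquid, linear in x = sqrt(gS_d), y = sqrt(gS_p).
    rows = []
    for (g, gd, gp), theta in zip(LIQUIDS.values(),
                                  (theta_water_deg, theta_diiodo_deg)):
        rhs = g * (1 + math.cos(math.radians(theta))) / 2.0
        rows.append((math.sqrt(gd), math.sqrt(gp), rhs))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x * x, y * y          # dispersive, polar components (mJ/m2)

disp, polar = owens_wendt(theta_water_deg=55.0, theta_diiodo_deg=45.0)
print(f"dispersive = {disp:.1f} mJ/m2, polar = {polar:.1f} mJ/m2")
```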