10 results for variable rate application
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
In collaboration with G.D. SpA I attended an internship with the purpose of developing a filter for the position control of industrial machines during testing and maintenance operations. The filter processes a position signal provided by an electronic handwheel so that the application can be controlled with a velocity signal whose dynamics are chosen arbitrarily during the design phase. Limiting the dynamics of the filter provides a more stable and less demanding reference trajectory, which reduces the vibrations and tracking errors of the motor it controls. It also prevents misuse of the handwheel by the technician, which could otherwise result in harmful interference between the mechanical parts moved by the handwheel.
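As a minimal sketch of the idea described above (not the thesis implementation; sample time, limits and the gain below are illustrative placeholders), a discrete-time filter can turn handwheel position increments into a velocity reference with bounded velocity and acceleration:

```python
# Minimal sketch (illustrative, not the thesis filter): convert handwheel
# position increments into a velocity command whose magnitude and rate of
# change are bounded, so the driven motor receives a smooth trajectory.

class HandwheelVelocityFilter:
    def __init__(self, dt=0.001, v_max=0.5, a_max=2.0, gain=50.0):
        self.dt = dt            # sample time [s] (placeholder)
        self.v_max = v_max      # velocity limit
        self.a_max = a_max      # acceleration limit
        self.gain = gain        # position-error-to-velocity gain
        self.pos_target = 0.0   # position requested via the handwheel
        self.pos_track = 0.0    # position implied by the filtered command
        self.v_out = 0.0        # current velocity command

    def update(self, handwheel_increment):
        """One control cycle: integrate the handwheel, then limit the dynamics."""
        self.pos_target += handwheel_increment
        v_des = self.gain * (self.pos_target - self.pos_track)
        v_des = max(-self.v_max, min(self.v_max, v_des))        # velocity limit
        dv_max = self.a_max * self.dt
        dv = max(-dv_max, min(dv_max, v_des - self.v_out))      # acceleration limit
        self.v_out += dv
        self.pos_track += self.v_out * self.dt
        return self.v_out


# Example: a sudden burst of handwheel increments yields a smooth, bounded command.
f = HandwheelVelocityFilter()
commands = [f.update(0.01 if k < 10 else 0.0) for k in range(50)]
```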
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and the lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH contaminated soil & groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water). Hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as its bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs and environmental factors (temperature, moisture, pH, degree of contamination). Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of synthetic surfactants may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the rate and extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
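The mass-transfer limitation described in section 1.4 can be made concrete with a minimal two-compartment model. This formulation is an illustrative aid only, not taken from the thesis: linear-driving-force desorption from the sorbed phase feeds Monod biodegradation in the aqueous phase, so that when the desorption rate constant is small, overall removal is mass-transfer limited regardless of how active the microorganisms are.

```latex
% Illustrative two-compartment model (not from the thesis):
% S  : sorbed concentration [mg/kg soil],  C : aqueous concentration [mg/L]
% k_des : desorption rate constant, K_d : sorption distribution coefficient [L/kg]
% m_s/V_w : soil-to-water ratio [kg/L], X : biomass [mg/L], Y : yield coefficient
\begin{align}
  \frac{\mathrm{d}S}{\mathrm{d}t} &= -k_{\mathrm{des}}\,\bigl(S - K_d\,C\bigr) \\
  \frac{\mathrm{d}C}{\mathrm{d}t} &= \frac{m_s}{V_w}\,k_{\mathrm{des}}\,\bigl(S - K_d\,C\bigr)
      \;-\; \frac{\mu_{\max}}{Y}\,\frac{C}{K_s + C}\,X
\end{align}
```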
Abstract:
The work presented in this thesis belongs to the context of constraint programming, a paradigm for modelling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large class of these problems is naturally formulated in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent these domains through interval approximations (hereafter, representations), specified by a lower bound and an upper bound according to an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly shown that combining different representations achieves performance orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency between the different representations is maintained and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that characterized previous works. In particular, we identify two possible types of combination, a strong one and a weak one, defining the notions of bound consistency on constraints and of synchronization between representations. Our study offers some interesting insights into the existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation able to optimally propagate lexicographic constraints, and the main existing representations.
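As a minimal illustration of the kind of interval representation the abstract refers to (class and function names below are hypothetical, not from the thesis), a subset-bounds domain stores a set variable as a pair of bounds under set inclusion, and combining or synchronizing two such representations amounts to intersecting their domains:

```python
# Minimal sketch (illustrative, not the thesis framework): a subset-bounds
# representation of a set variable. The domain is the interval [lb, ub] under
# set inclusion, i.e. every admissible set S satisfies lb ⊆ S ⊆ ub.

class SetInterval:
    def __init__(self, lb, ub):
        self.lb = frozenset(lb)   # elements that must belong to the set
        self.ub = frozenset(ub)   # elements that may belong to the set
        if not self.lb <= self.ub:
            raise ValueError("empty domain: lower bound not contained in upper bound")

    def is_fixed(self):
        return self.lb == self.ub

    def synchronize(self, other):
        """Intersect two representations of the same variable: union the
        required elements, intersect the possible elements."""
        return SetInterval(self.lb | other.lb, self.ub & other.ub)


def propagate_subset(x, y):
    """Bound-consistency sketch for the constraint x ⊆ y on subset-bounds
    domains: ub(x) shrinks to ub(y), lb(y) absorbs lb(x)."""
    return SetInterval(x.lb, x.ub & y.ub), SetInterval(y.lb | x.lb, y.ub)


# Example: two partial views of the same variable, then a subset constraint.
a = SetInterval({1}, {1, 2, 3, 4})
b = SetInterval({2}, {1, 2, 3})
print(a.synchronize(b).lb, a.synchronize(b).ub)   # frozenset({1, 2}) frozenset({1, 2, 3})
```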
Abstract:
Survival during the early life stages of marine species, including nearshore temperate reef fishes, is typically very low, and small changes in mortality rates, due to physiological and environmental conditions, can have marked effects on the survival of a cohort and, on a larger scale, on the success of a recruitment season. Moreover, trade-offs between larval growth and the accumulation of energetic resources prior to settlement are likely to influence growth and survival through this critical period and afterwards. Rockfish recruitment rates are notoriously variable between years and across geographic locations. Rates of onshore delivery of pelagic juveniles (defined here as settlement) of two species of nearshore rockfishes, Sebastes caurinus and Sebastes carnatus, were monitored between 2003 and 2009 using artificial collectors placed at San Miguel and Santa Cruz Island, off the Southern California coast. I investigated spatiotemporal variation in settlement rate, lipid content, pelagic larval duration and larval growth of the newly settled fishes; I assessed relationships between birth date, larval growth, early life-history characteristics and lipid content at settlement, also considering interspecific differences; finally, I attempted to relate interannual patterns of settlement and of early life history traits to easily accessible local and regional indices of ocean conditions, including in situ ocean temperature, regional upwelling, sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration. Spatial variation appeared to be of low relevance, while significant interannual differences were detected in settlement rate, pelagic larval duration and larval growth. The lipid content of the newly settled fishes was highly variable in space and time, but did not differ between the two species and did not show any relationship with early life history traits, indicating either that no trade-off involved these physiological processes or that it was masked by high individual variability in different periods of larval life. Significant interspecific differences were found in the timing of parturition and settlement and in larval growth rates, with S. carnatus growing faster and breeding and settling later than S. caurinus. The two species also exhibited different patterns of correlation between larval growth rates and larval duration: S. carnatus larval duration was longer when growth in the first two weeks post-hatch was faster, while S. caurinus had a shorter larval duration when it grew fast in the middle and at the end of larval life, suggesting different larval strategies. Fishes with longer larval durations were larger at settlement and exhibited a longer planktonic phase in periods of favourable environmental conditions. Ocean conditions had low explanatory power for interannual variation in early life history traits, but very high explanatory power for settlement fluctuations, with regional upwelling strength being the principal indicator. Nonetheless, interannual variability in larval duration and growth was related to major phenological changes in upwelling that occurred during the period of this study and that caused negative consequences at all trophic levels along the California coast.
Despite the low explanatory power of the environmental variables used in this study for the variation of larval biological traits, environmental processes were related differently to the early life history characteristics analyzed for each species, indicating possible species-specific susceptibility to ocean conditions and local environmental adaptation, which should be further investigated. These results have implications for understanding the processes influencing larval and juvenile survival, and consequently recruitment variability, which may depend on biological characteristics and environmental conditions.
Abstract:
Sudden cardiac death due to ventricular arrhythmia is one of the leading causes of mortality in the world. In the last decades, it has been shown that anti-arrhythmic drugs which prolong the refractory period, by means of prolongation of the cardiac action potential duration (APD), play an important role in preventing relevant human arrhythmias. However, it has long been observed that the “class III antiarrhythmic effect” diminishes at faster heart rates, and this phenomenon represents a major weakness, since fast rates are precisely the situation in which arrhythmias are most prone to occur. It is well known that mathematical modeling is a useful tool for investigating cardiac cell behavior. In the last 60 years, a multitude of cardiac models has been created; from the pioneering work of Hodgkin and Huxley (1952), who first described the ionic currents of the squid giant axon quantitatively, mathematical modeling has made great strides. The O’Hara model, which I employed in this research work, is one of the modern computational models of the ventricular myocyte, belonging to a new generation that began in 1991 with the ventricular cell model by Noble et al. The success of these models is that they can generate novel predictions, suggest experiments and provide a quantitative understanding of the underlying mechanisms. Obviously, the drawback is that they remain simplified models and do not fully represent the real system. The overall goal of this research is to provide an additional tool, through mathematical modeling, to understand the behavior of the main ionic currents involved during the action potential (AP), especially underlining the differences between slower and faster heart rates. In particular, the aims are to evaluate the role of rate dependence on the action potential duration, to implement a new method for interpreting the behavior of ionic currents after a perturbation, and to verify the validity of the work proposed by Antonio Zaza using an injected current as the perturbing effect.
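To illustrate the rate dependence of APD that the abstract discusses, a toy restitution iteration (not the O’Hara model; the constants are illustrative, not fitted to data) shows how steady-state APD shortens as the pacing cycle length decreases:

```python
import math

# Minimal sketch (not the O'Hara model): a phenomenological APD-restitution
# iteration illustrating rate dependence of the action potential duration.
# APD_{n+1} depends on the preceding diastolic interval DI_n = BCL - APD_n.

APD_MAX = 300.0   # ms, asymptotic APD at long diastolic intervals (placeholder)
A = 120.0         # ms, amplitude of the restitution curve (placeholder)
TAU = 100.0       # ms, restitution time constant (placeholder)

def next_apd(di):
    """Exponential restitution curve: shorter DI -> shorter APD."""
    return APD_MAX - A * math.exp(-di / TAU)

def steady_state_apd(bcl, beats=200, apd0=250.0):
    """Pace at a basic cycle length (BCL, ms) until the APD settles."""
    apd = apd0
    for _ in range(beats):
        di = max(bcl - apd, 1.0)   # diastolic interval cannot be negative
        apd = next_apd(di)
    return apd

# Rate dependence: APD shortens as the pacing rate increases (BCL decreases).
for bcl in (1000, 750, 500, 400, 300):
    print(f"BCL = {bcl:4d} ms -> steady-state APD ≈ {steady_state_apd(bcl):.1f} ms")
```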
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation concerns sizing the pipes of the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown city and Cabrera city networks), trying to solve and optimise a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision support system generator for multi-objective optimisation was used: GANetXL, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which provided the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the water supply. The main intervention was to replace these pumps with variable-speed driven pumps (VSDPs), installing inverters capable of varying their speed during the day. In this way, large energy and cost savings were achieved, together with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs and different configurations. The second experiment concerned the Cabrera city network, a smaller WDN with a single fixed-speed pump. The optimisation problem was the same, namely the minimisation of energy consumption in parallel with the minimisation of TNps, and the same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments covering a wide variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options to deal with, including a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to “roam” among these choices until a satisfactory result convinces him/her that a good optimisation point has been reached.
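As a small illustration of the two objectives named above (daily energy cost and total number of pump switches), the sketch below evaluates a candidate 24-hour pump schedule. It is not the GANetXL/EPANET workflow, and the pump power and tariff values are made-up placeholders:

```python
# Minimal sketch (illustrative, not GANetXL/EPANET): evaluating a candidate
# 24-hour pump schedule against the two objectives used in the thesis:
# daily energy cost and total number of pump switches (TNps).

def evaluate_schedule(schedule, pump_power_kw=75.0, tariff_eur_per_kwh=None):
    """schedule: list of 24 values, 1 = pump on during that hour, 0 = off."""
    if tariff_eur_per_kwh is None:
        # placeholder two-band tariff: cheaper at night (hours 0-6 and 22-23)
        tariff_eur_per_kwh = [0.08 if (h < 7 or h >= 22) else 0.15 for h in range(24)]

    energy_kwh = sum(pump_power_kw * on for on in schedule)
    cost_eur = sum(pump_power_kw * on * tariff_eur_per_kwh[h]
                   for h, on in enumerate(schedule))
    # a "switch" is counted every time the pump changes state between hours
    switches = sum(1 for h in range(1, 24) if schedule[h] != schedule[h - 1])
    return cost_eur, switches, energy_kwh


# Example: pump mostly at night to exploit the cheaper tariff band.
night_schedule = [1] * 7 + [0] * 15 + [1] * 2
print(evaluate_schedule(night_schedule))
```

In a multi-objective setting such as NSGA-II, each candidate schedule would be scored by a function of this kind (with the hydraulics checked by a solver such as EPANET), and the non-dominated cost/switch pairs would form the Pareto front.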
Abstract:
This thesis deals with the implementation of a variable stiffness joint antagonistically actuated by a pair of twisted-string actuators (TSAs). This type of joint can be applied in the field of robotics, for example in the UB Hand IV (the anthropomorphic robotic hand developed by the University of Bologna). The purposes of the activity are to build the joint dynamic model and to simultaneously control position and stiffness. Three different control approaches (feedback linearization, PID, PID + feedforward) are proposed and validated in simulation. To improve the joint stiffness properties, a joint with an elastic element is taken into account and discussed. Finally, the experimental setup developed for the experimental validation of the proposed control approaches is presented.
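To illustrate the antagonistic variable-stiffness principle behind such a joint (this is a generic textbook-style sketch, not the thesis model; the quadratic spring and its coefficient are assumptions), the net torque is set by the difference of the two actuator contributions while the stiffness grows with their co-contraction:

```python
# Minimal sketch (illustrative, not the thesis model): antagonistic joint with
# two pulling actuators acting through nonlinear (quadratic) elastic elements.

K_ELASTIC = 50.0   # coefficient of the quadratic spring torque (placeholder)

def joint_torque(q, theta1, theta2):
    """q: joint angle; theta1, theta2: actuator-side positions (rad)."""
    tau1 = K_ELASTIC * (theta1 - q) ** 2    # each tendon can only pull
    tau2 = K_ELASTIC * (q - theta2) ** 2
    return tau1 - tau2                      # antagonistic arrangement

def joint_stiffness(q, theta1, theta2):
    """Stiffness = -d(torque)/dq: grows with the co-contraction of the actuators."""
    return 2.0 * K_ELASTIC * ((theta1 - q) + (q - theta2))

# Same equilibrium (zero net torque) but higher stiffness with more co-contraction:
print(joint_torque(0.0, 0.2, -0.2), joint_stiffness(0.0, 0.2, -0.2))
print(joint_torque(0.0, 0.4, -0.4), joint_stiffness(0.0, 0.4, -0.4))
```

This is the property that lets position and stiffness be controlled simultaneously: the symmetric component of the actuator motion sets the stiffness, while the antisymmetric component moves the joint.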
Abstract:
Traffic management is one of the main problems of modern cities, and it leads to new challenges in the optimisation of vehicle flow. Traffic light control is one of the fundamental elements for optimising traffic management. Currently, traffic detection is carried out by means of sensors, among which inductive loops are the most widely used; their installation and management imply high costs. In this context, the European project COLOMBO aims at devising new traffic light control systems able to detect vehicular traffic using sensors that are cheaper to install and maintain, and capable, on the basis of such detections, of self-organising, drawing inspiration from the field of artificial intelligence known as swarm intelligence. At the basis of this traffic light self-organisation in COLOMBO there are two different levels of policies: macroscopic and microscopic. At the first level, the macroscopic policies, using pheromone as an abstraction of the current traffic level, choose the management policy according to the amount of pheromone present in the incoming and outgoing lanes. The microscopic policies, on the other hand, decide the duration of the red and green periods by modifying a sequence of phases, called a chain in COLOMBO. The chains can be chosen by the system according to the current value of the desirability threshold, and each chain corresponds to a desirability threshold. The purpose of this work is to suggest alternative methods to the current computation of this desirability threshold in scenarios with a low penetration rate of vehicle-detection devices. Every complex algorithm needs to be optimised to improve its performance; in this case too, the proposed algorithms underwent a parameter tuning process to optimise their performance in scenarios with a low penetration rate of vehicle-detection devices. Finally, on the basis of the parameter tuning work, simulations were run to assess which of the suggested approaches is the best.
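To make the pheromone-based macroscopic layer more concrete, the sketch below shows the general idea of evaporating and reinforcing a per-lane pheromone level and choosing a coarse policy from it. It is an illustrative sketch only, not COLOMBO's actual algorithm; the evaporation factor, threshold and policy names are assumptions:

```python
# Minimal sketch (illustrative, not COLOMBO's algorithm): a macroscopic policy
# layer that keeps a "pheromone" level per lane as an abstraction of traffic,
# lets it evaporate over time, reinforces it with detected vehicles, and picks
# a control policy by comparing pheromone on incoming and outgoing lanes.

EVAPORATION = 0.9   # fraction of pheromone kept at each step (placeholder)

def update_pheromone(pheromone, detected_vehicles):
    """Evaporate, then reinforce with the vehicles detected in this interval."""
    return {lane: EVAPORATION * level + detected_vehicles.get(lane, 0)
            for lane, level in pheromone.items()}

def choose_macro_policy(pheromone, incoming, outgoing, congestion_threshold=20.0):
    """Pick a coarse policy from pheromone on incoming vs outgoing lanes."""
    p_in = sum(pheromone[l] for l in incoming)
    p_out = sum(pheromone[l] for l in outgoing)
    if p_in > congestion_threshold and p_in > p_out:
        return "drain_incoming"   # favour green for the congested approach
    if p_out > congestion_threshold:
        return "hold_outgoing"    # avoid pushing more vehicles downstream
    return "balanced"

# Example over a few control intervals with partial vehicle detections.
pher = {"in_N": 0.0, "in_S": 0.0, "out_E": 0.0}
for detections in ({"in_N": 8}, {"in_N": 12, "in_S": 3}, {"in_N": 15}):
    pher = update_pheromone(pher, detections)
    print(choose_macro_policy(pher, ["in_N", "in_S"], ["out_E"]))
```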
Abstract:
The work carried out in this thesis consists in implementing an Android application for video streaming, compliant with the MPEG-DASH standard. The goal is to provide a valid tool for carrying out experimental analyses of particular algorithms, known as rate adaptation algorithms. MPEG Dynamic Adaptive Streaming over HTTP is an emerging standard and is considered by many the future of multimedia streaming. This technology allows the video quality to be self-regulated according to network conditions, device capabilities or user preferences. Moreover, being a standard, it makes the servers and devices of different multimedia content providers interoperable. The first introductory chapters present the standard and the related work; then my application proposal, DashPlayer, is described. In conclusion, an experimental evaluation is carried out on the above-mentioned algorithms, which constitute the logical-functional part of the application.
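As an example of the class of rate adaptation algorithms the thesis evaluates, the sketch below shows a basic throughput-based rule: keep a smoothed estimate of download throughput and request the highest representation that fits within a safety margin of it. This is a generic illustration, not DashPlayer's algorithm; the bitrate ladder and constants are placeholders:

```python
# Minimal sketch (illustrative, not DashPlayer's algorithm): throughput-based
# DASH rate adaptation. The client smooths the measured segment throughput and
# picks the highest representation below a safety fraction of that estimate.

BITRATES_KBPS = [500, 1200, 2500, 5000, 8000]   # available representations (placeholder)
SAFETY = 0.8                                    # use only 80% of estimated throughput
ALPHA = 0.3                                     # EWMA smoothing factor

def update_estimate(current_estimate_kbps, last_segment_throughput_kbps):
    """Exponentially weighted moving average of measured segment throughput."""
    if current_estimate_kbps is None:
        return last_segment_throughput_kbps
    return (1 - ALPHA) * current_estimate_kbps + ALPHA * last_segment_throughput_kbps

def select_bitrate(estimate_kbps):
    """Highest representation that fits within the safe throughput budget."""
    budget = SAFETY * estimate_kbps
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    return max(candidates) if candidates else BITRATES_KBPS[0]

# Example: throughput measured after each downloaded segment.
estimate = None
for measured in (3000, 4200, 1500, 900, 6000):
    estimate = update_estimate(estimate, measured)
    print(f"estimate ≈ {estimate:.0f} kbps -> request {select_bitrate(estimate)} kbps")
```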
Abstract:
Isochrysis galbana is a widely used strain in aquaculture in spite of its low productivity. To maximize the productivity of processes based on this microalgal strain, a model was developed considering the influence of irradiance, temperature, pH and dissolved oxygen concentration on the photosynthesis and respiration rates. Results demonstrate that this strain tolerates temperatures up to 35 °C but is highly sensitive to irradiances higher than 500 µE·m-2·s-1 and dissolved oxygen concentrations higher than 11 mg·l-1. With the research group of the Universidad de Almería, the developed model was validated using data from an industrial-scale outdoor tubular photobioreactor, demonstrating that inadequate temperature and dissolved oxygen concentrations reduce productivity to half of the maximum achievable according to light availability under real outdoor conditions. The developed model is a useful tool for managing working processes, especially in the development of new processes based on this strain, and for making decisions regarding optimal control strategies. The outdoor production of Isochrysis galbana T-iso in industrial-size tubular photobioreactors (3.0 m3) has also been studied. Experiments were performed modifying the dilution rate and evaluating the biomass productivity and quality, in addition to the overall performance of the system. Results confirmed that T-iso can be produced outdoors at commercial scale in continuous mode, with productivities up to 20 g·m-2·day-1 of biomass rich in proteins (45%) and lipids (25%) being obtained. The utilization of this type of photobioreactor allows controlling the contamination and pH of the cultures, but the daily variation of solar radiation imposes inadequate dissolved oxygen concentrations and temperatures to which the cells are exposed inside the reactor. Excessive dissolved oxygen reduced the biomass productivity to 68% of the maximum, whereas inadequate temperature reduced it to 63%. Thus, by optimally controlling these parameters the biomass productivity could be doubled. These results confirm the potential to produce this valuable strain at commercial scale in optimally designed and operated tubular photobioreactors as a biotechnological industry.
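The abstract describes a photosynthesis-rate model driven by irradiance, temperature, pH and dissolved oxygen. The sketch below shows the general multiplicative structure such models often take; the functional forms and every constant are illustrative assumptions, not the parameters fitted in the thesis for Isochrysis galbana:

```python
import math

# Minimal sketch (illustrative, not the thesis model): a multiplicative
# photosynthesis-rate model where a maximum rate is scaled by normalized
# responses to irradiance, temperature, pH and dissolved oxygen.

P_MAX = 1.0        # maximum specific photosynthesis rate (arbitrary units)
IK = 150.0         # irradiance half-saturation constant [µE m-2 s-1] (placeholder)
T_OPT, T_WIDTH = 28.0, 8.0      # optimal temperature and tolerance [°C] (placeholder)
PH_OPT, PH_WIDTH = 8.0, 1.0     # optimal pH and tolerance (placeholder)
DO_INHIB = 11.0    # dissolved oxygen level where inhibition becomes strong [mg/l]

def photosynthesis_rate(irradiance, temperature, ph, dissolved_oxygen):
    f_light = irradiance / (IK + irradiance)                       # saturating light response
    f_temp = math.exp(-((temperature - T_OPT) / T_WIDTH) ** 2)     # bell-shaped temperature response
    f_ph = math.exp(-((ph - PH_OPT) / PH_WIDTH) ** 2)              # bell-shaped pH response
    f_oxygen = 1.0 / (1.0 + (dissolved_oxygen / DO_INHIB) ** 4)    # inhibition at high DO
    return P_MAX * f_light * f_temp * f_ph * f_oxygen

# Example: favourable conditions vs oxygen-inhibited conditions.
print(photosynthesis_rate(300, 28, 8.0, 7))    # near-optimal
print(photosynthesis_rate(300, 28, 8.0, 14))   # strong O2 inhibition, rate drops
```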