Resumo:
Understanding the interaction of sea ice with offshore structures is of primary importance for the development of technology in cold climate regions. The rheological properties of sea ice (strength, creep, viscosity) as well as the roughness of the contact surface are the main factors influencing the type of interaction with a structure. A device was designed and built, and small-scale laboratory experiments were carried out, to study the frictional interaction of sea ice with steel by means of a uniaxial compression rig. Sea ice was grown artificially between a stainless steel piston (of circular cross section) and a hollow cylinder of the same material, coaxial with the piston and of the same surface roughness. Three values of roughness were tested: 1.2, 10 and 30 μm Ry (maximum asperity height), chosen as representative of typical surface conditions, from smooth to normally corroded steel. Creep tests (0.2, 0.3, 0.4 and 0.6 kN) were conducted at T = -10 °C. By pushing the piston head towards the cylinder base, three types of relative movement were observed: 1) the piston slid through the ice; 2) the piston slid through the ice and the ice slid on the surface of the outer cylinder; 3) the ice slid only on the cylinder surface. A cyclic stick-slip motion of the piston was detected with a representative frequency of 0.1 Hz. The ratio of the mean rate of axial displacement to the frequency of the stick-slip oscillations was found to be comparable to the roughness length (Sm). Roughness is the most influential parameter affecting the amplitude of the oscillations, while the load has a significant influence on their frequency. Guidelines for further investigations are recommended. Marco Nanetti - seloselo@virgilio.it
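The reported relation between the mean displacement rate, the stick-slip frequency and the roughness length Sm can be sketched as a simple calculation. All numerical values below are illustrative assumptions, not measurements from the study.

```python
# Hypothetical illustration: dividing the mean axial displacement rate by
# the stick-slip frequency gives the slip length per cycle, which the study
# found to be comparable to the roughness length Sm.

def slip_length_per_cycle(mean_rate_um_per_s: float, frequency_hz: float) -> float:
    """Characteristic displacement per stick-slip cycle, in micrometres."""
    return mean_rate_um_per_s / frequency_hz

# Illustrative values: a mean rate of 3 um/s at the reported ~0.1 Hz
# oscillation frequency yields a slip length of about 30 um, on the order
# of the coarsest tested roughness.
length_um = slip_length_per_cycle(3.0, 0.1)
```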
Resumo:
For seven years now, the permanent GPS station at Terra Nova Bay has been acquiring daily data which, when suitably processed, contribute to the understanding of Antarctic dynamics and to verifying whether global geophysical models fit the area of interest of the permanent GPS station. A literature review showed that a GPS time series is subject to many possible perturbations, mainly due to errors in the modelling of some ancillary data required for processing. Moreover, analyses have shown that such time series derived from geodetic surveys are affected by different types of noise which, if not properly accounted for, can alter the parameters of interest for the geophysical interpretation of the data. The thesis work consists in understanding to what extent these errors can affect the dynamic parameters characterizing the motion of the permanent station, with particular reference to the velocity of the point on which the station is installed and to any periodic signals that may be identified.
Resumo:
Introduction 1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence coupled with their potential carcinogenicity makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993). 1.2 Remediation technologies Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material.
The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or to transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and the lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants. 1.3 Bioremediation of PAH contaminated soil & groundwater Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown.
Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water). Hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner. 1.4 Bioavailability of PAH in the subsurface Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000).
Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination). Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000). 1.5 Increasing the bioavailability of PAH in soil Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the rate and extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
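The rate-limiting picture described above (desorption from soil into solution, followed by microbial degradation of the dissolved fraction only) can be sketched as a minimal two-compartment model. All rate constants and initial masses below are hypothetical illustrations, not values from the cited studies.

```python
# Minimal sketch of mass-transfer-limited biodegradation: contaminant
# desorbs from the soil phase at rate k_des into the aqueous phase, where
# it is degraded at rate k_bio. Only the dissolved fraction is bioavailable.

def simulate(sorbed: float, aqueous: float, k_des: float, k_bio: float,
             dt: float, steps: int):
    """Explicit Euler integration; returns (sorbed, aqueous, degraded)."""
    degraded = 0.0
    for _ in range(steps):
        flux = k_des * sorbed          # desorption into solution
        bio = k_bio * aqueous          # microbial uptake from solution only
        sorbed -= flux * dt
        aqueous += (flux - bio) * dt
        degraded += bio * dt
    return sorbed, aqueous, degraded

# When k_des << k_bio, overall removal is controlled by desorption, which
# mirrors the low bioavailability of strongly sorbed PAHs described above.
s, a, d = simulate(sorbed=100.0, aqueous=0.0, k_des=0.01, k_bio=1.0,
                   dt=0.01, steps=10_000)
```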
Resumo:
The Peer-to-Peer network paradigm is drawing the attention of both final users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both the methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms is that low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation formation point of view. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology. We found fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown the ability to enforce fairness and tackle free-riding and cheating nodes.
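The copy-the-better-peer rule at the heart of SLAC and SLACER can be sketched as follows. This is a minimal illustration of the evolutionary copy-and-mutate idea only; the toy utilities, the mutation rate and the omission of topology rewiring are simplifying assumptions, not the thesis's actual algorithms.

```python
import random

# Minimal sketch of the evolutionary copy rule: each node periodically
# compares its utility with a randomly chosen peer and, if the peer
# performs better, copies its strategy (the full SLAC/SLACER algorithms
# also copy and rewire links). Occasional mutation keeps exploring.

def evolve(strategies, utility, rounds, mutation=0.01, rng=None):
    rng = rng or random.Random(0)
    strategies = list(strategies)
    n = len(strategies)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if utility(strategies[j]) > utility(strategies[i]):
            strategies[i] = strategies[j]           # copy the better peer
        if rng.random() < mutation:
            strategies[i] = rng.choice(["C", "D"])  # rare random mutation
    return strategies

# Toy utility in which cooperators ("C") score higher than defectors ("D"):
# the population converges toward cooperation, echoing the high-cooperation
# states SLAC reaches in the Prisoner's Dilemma tests.
final = evolve(["C", "D"] * 10, utility=lambda s: 1.0 if s == "C" else 0.5,
               rounds=2000)
```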
Resumo:
ABSTRACT In the past the motto was "bigger is better". Companies had huge warehouses which they kept filling for as long as they could hold goods, in the belief that the high costs of inventory management would be offset by sales. In recent years, however, companies have radically changed their approach to inventory management: stock is now considered an excess, and warehouses with high inventory levels are seen as a missed opportunity and wasted capital. Manufacturers are increasingly aware of the need to minimize inventory and maximize product turnover. This is the context of inventory management problems, with particular reference to the phenomenon of slow moving items, an expression denoting all slow-moving materials stocked in the warehouse. These often represent a significant financial burden for companies: they incur the costs of tied-up capital and material storage, in addition to those associated with the risk of obsolescence. Very often the problem tends to be neglected, until it grows to the point of becoming a corporate priority for which a quick solution is sought. The thesis presents a preliminary case study that formed the starting point for the design of an action plan for the reduction and disposal of obsolete and slow-moving items in an Italian company of the mechanical engineering sector, for which the first results of the implementation are also presented. After providing a theoretical overview of the role and management of inventory, the slow moving problem is analyzed in a specific company, Biffi s.r.l. of Fiorenzuola d'Arda (Piacenza), a firm that has produced valve actuators since 1955.
After examining the reference market, the type of product and the manufacturing process, the as-is state of the warehouse is analyzed. This analysis highlights the problems related to the identification, classification and management of slow-moving items. It was precisely the study of these data that laid the foundations for the development of the action plan for rationalizing warehouse stock. In the last part of the thesis I describe the actions taken and present the results obtained, which yielded significant savings without any investment, simply by making better use of the resources already available.
Resumo:
This work is intended as a contribution to the modelling of parallel computers. Such a parallel computer can be regarded as a macroscopic physical dynamical system with a very large number of degrees of freedom, a discrete state space and discrete time. Systems of this kind are the subject of nonlinear dynamics. Any model-based treatment of a system with such differentiated interactions must restrict itself to certain aspects adapted to the aim and purpose of the investigation. In doing so, both general ideas and concrete knowledge must be translated into a mathematically tractable model. The contributions to the modelling of parallel computers presented in this work serve several goals. On the one hand, a model intended to usefully predict the execution time of a concrete parallel program on a concrete parallel computer is critically examined and further developed. On the other hand, the investigation of a concrete problem from the field of computer science and engineering is used to develop a deeper understanding of the system to be modelled and to derive from it new aspects for the modelling of dynamical systems in general. This work shows that, when modelling parallel computers, it is necessary to integrate many technical design properties into the model. These properties, however, follow the very rapid development of technology in this field. So that the formulation, testing and validation of the model can keep pace with the development of the object domain, new modelling methods must be developed and applied in the future that allow rapid adaptation to changes in the object domain. This investigation follows an interdisciplinary approach, employing contents of computer science on the one hand and basic methods of experimental physics on the other.
To this end, the predictions of the abstract models were compared with measurement results obtained experimentally on real systems. On this basis it is shown that the hierarchical structure of the memory can influence the execution speed of an application by several orders of magnitude. The model of the individual compute nodes of a parallel computer developed in this work reproduces these effects correctly within a relative prediction error of only a few percent.
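The orders-of-magnitude influence of the memory hierarchy can be illustrated with a minimal additive timing model of a single compute node. The level names, latencies and access counts below are illustrative assumptions, not the model or the measurements of this work.

```python
# Toy single-node timing model: estimated execution time is the number of
# memory accesses served by each hierarchy level times that level's latency.
# Latencies (in cycles) are illustrative ballpark figures.

LATENCY_CYCLES = {"L1": 4, "L2": 12, "RAM": 200}

def predicted_cycles(hits: dict) -> int:
    """Total predicted cycles for the given per-level access counts."""
    return sum(count * LATENCY_CYCLES[level] for level, count in hits.items())

# The large latency gap between cache and main memory is what lets the
# hierarchy change runtimes by orders of magnitude, as noted above:
cache_friendly = predicted_cycles({"L1": 1_000_000, "L2": 10_000, "RAM": 1_000})
cache_hostile = predicted_cycles({"L1": 10_000, "L2": 10_000, "RAM": 1_000_000})
```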
Resumo:
This work deals with the adsorption of phospholipid vesicles, proteins and latex beads on functionalized solid-supported surfaces. The focus of the treatise lies in modelling the kinetics with the aid of a simulation program, taking into account mass transport in stagnation-point geometry, adsorption and subsequent conformational changes of adhered particles. The simulation program is based on an RSA (random sequential adsorption) algorithm and, thanks to additionally implemented options, e.g. for the treatment of electrostatic repulsion, of spreading processes or of desorption events, makes it possible to reproduce the adsorption process under realistic physical conditions and to answer specific biological questions. Fitting simulations to experimental data yields dynamic parameters such as the transport, adsorption, spreading and desorption rates. The experimental data were obtained using the quartz crystal microbalance technique (QCM), impedance spectroscopy (IS) and atomic force microscopy (AFM). In addition to the kinetics, the graphical output of the simulation program provides information on the surface distribution of the adsorbed and spread particles. In the investigation of systems in which adsorption is reversible and desorption processes consequently play an important role, the simulation program pointed the way to a completely novel sensor concept. It was shown that the kinetics of adsorption and desorption can be determined from the analysis of the power spectrum of the fluctuations in surface coverage. This is interesting from a sensing point of view, since a large number of interfering influences, e.g. mass transport and electronic drift, need not be taken into account in this case.
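The basic random sequential adsorption (RSA) step that the simulation program builds on can be sketched as follows. Surface size, particle radius and the number of attempts are illustrative assumptions; the actual program additionally treats mass transport, electrostatic repulsion, spreading and desorption.

```python
import random

# Minimal RSA sketch: particles arrive at random positions on the surface
# and stick only if they do not overlap an already adsorbed particle.

def rsa(width: float, height: float, radius: float, attempts: int, seed=0):
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        # Accept only if the new disk overlaps no previously placed disk.
        if all((x - px) ** 2 + (y - py) ** 2 >= (2 * radius) ** 2
               for px, py in placed):
            placed.append((x, y))
    return placed

# Acceptance drops as the surface fills, which is why RSA-type adsorption
# kinetics slow down and saturate below close packing (the 2D jamming
# coverage for disks is roughly 0.547).
particles = rsa(width=100.0, height=100.0, radius=2.0, attempts=5000)
```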
Resumo:
Large numbers of functionally competent T cells are required for protection from diseases for which antibody-based vaccines have consistently failed (1), which is the case for many chronic viral infections and solid tumors. Therefore, therapeutic vaccines aim at the induction of strong antigen-specific T-cell responses. Novel adjuvants have considerably improved the capacity of synthetic vaccines to activate T cells, but more research is necessary to identify optimal compositions of potent vaccine formulations. Consequently, there is a great need to develop accurate methods for the efficient identification of antigen-specific T cells and the assessment of their functional characteristics directly ex vivo. In this regard, hundreds of clinical vaccination trials have been implemented during the last 15 years, and monitoring techniques have become more and more standardized.
Resumo:
Prosthesis-patient mismatch (PPM) remains a controversial issue with the most recent stented biological valves. We analyzed the incidence of PPM after implantation of the Carpentier-Edwards Perimount Magna Ease aortic valve (PMEAV) bioprosthesis and assessed the early clinical outcome. Two hundred and seventy consecutive patients who received a PMEAV bioprosthesis between January 2007 and July 2008 were analyzed. Pre-, peri- and postoperative data were assessed, and echocardiographic as well as clinical follow-up was performed. Mean age was 72±9 years; 168 patients (62.2%) were male. Fifty-seven patients (21.1%) were below 65 years of age. Absence of PPM, corresponding to an indexed effective orifice area >0.85 cm²/m², was 99.5%. Observed in-hospital mortality was 2.2% (six patients), with a predicted mortality according to the additive EuroSCORE of 7.6±3.1%. At echocardiographic assessment after a mean follow-up period of 150±91 days, the mean transvalvular gradient was 11.8±4.8 mmHg (all valve sizes). No paravalvular leakage was seen. Nine patients died during follow-up. The Carpentier-Edwards PMEAV bioprosthesis shows excellent hemodynamic performance. This valve can be implanted in all sizes with an incidence of severe PPM below 0.5%.
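The indexed effective orifice area criterion used above to define PPM can be written as a small helper. The example inputs are illustrative, not patient data from this series.

```python
# Sketch of the PPM criterion: the effective orifice area (EOA) is indexed
# to body surface area (BSA); an indexed EOA at or below 0.85 cm^2/m^2
# indicates mismatch. The example values below are hypothetical.

def indexed_eoa(eoa_cm2: float, bsa_m2: float) -> float:
    """Effective orifice area indexed to body surface area (cm^2/m^2)."""
    return eoa_cm2 / bsa_m2

def has_mismatch(eoa_cm2: float, bsa_m2: float, threshold: float = 0.85) -> bool:
    return indexed_eoa(eoa_cm2, bsa_m2) <= threshold

# Illustrative example: an EOA of 1.9 cm^2 in a patient with a BSA of
# 1.8 m^2 gives an indexed EOA of about 1.06 cm^2/m^2, above the 0.85
# threshold, hence no mismatch.
no_mismatch = not has_mismatch(1.9, 1.8)
```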