952 results for floating
Abstract:
Octopus vulgaris on-growing in floating cages is a promising activity implemented in Spain at an industrial level, with productions of 16-32 tons/year since 1998. Nevertheless, some aspects of the culture system need to be evaluated to guarantee its profitability. In the present study two rearing systems and two dietary treatments were evaluated. Individual and group rearing, in PVC net compartments and floating cages respectively, were compared under two dietary treatments. One diet was composed of bogue, supplied as a 'discarded' species from local fish farms, and the other was based on a 40-60% mixture of discarded bogue and crab Portunus pelagicus. All octopuses were PIT-tagged and the experiment lasted two months. Animals were sampled once throughout the experimental period, and absolute growth rate (AGR, g/day) and mortality (%) were calculated. The AGR of group rearing was above 30 g/day; however, individual rearing showed 100% survival, so its biomass increment was higher. Males grew more than females regardless of dietary treatment.
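A minimal sketch of the two growth metrics reported in this abstract, assuming only their standard definitions; the example weights and counts below are hypothetical, not the study's data:

```python
def absolute_growth_rate(initial_weight_g: float, final_weight_g: float, days: int) -> float:
    """Absolute growth rate (AGR) in g/day: weight gained divided by rearing time."""
    return (final_weight_g - initial_weight_g) / days

def mortality_percent(initial_count: int, final_count: int) -> float:
    """Mortality as a percentage of the initial stock."""
    return 100.0 * (initial_count - final_count) / initial_count

# Hypothetical example: an octopus growing from 800 g to 2,700 g over a 60-day trial.
print(absolute_growth_rate(800, 2700, 60))   # ~31.7 g/day
print(mortality_percent(40, 40))             # 0.0 %, i.e. 100% survival
```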
Abstract:
Doctoral programme: Ecology and Management of Living Marine Resources (Ecología y gestión de recursos vivos marinos)
Abstract:
[ES] Given the need to propose coastal protection structures that are environmentally viable and operationally effective, a floating breakwater is proposed to protect the pier of the Coast Guard Station of Santa Marta, Colombia. Its design and validation are based on a 2D numerical particle model known as Smoothed Particle Hydrodynamics. The full validation and case-analysis process was carried out in order to obtain reliable results and to propose a floating breakwater with specific dimensions that attenuates the effect of the incident waves on the pier.
Abstract:
[EN] Aortic dissection is a disease that can be deadly even with correct treatment. It consists of a rupture of a layer of the aortic artery wall, causing blood to flow inside this rupture, called a dissection. The aim of this paper is to contribute to its diagnosis by detecting the dissection edges inside the aorta. A subpixel-accuracy edge detector based on the hypothesis of the partial volume effect is used, where the intensity of an edge pixel is the sum of the contributions of each colour weighted by its relative area inside the pixel. The method uses a floating window centred on the edge pixel and computes the edge features. The accuracy of our method is evaluated on synthetic images of different thicknesses and noise levels, obtaining an edge detection with a maximal mean error lower than 16 percent of a pixel.
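A minimal sketch of the partial-volume idea this abstract relies on: an edge pixel's intensity is modelled as an area-weighted mix of the two regions it straddles, and inverting that model recovers a sub-pixel area fraction. The intensity values below are hypothetical illustrations, not the paper's implementation:

```python
def partial_volume_intensity(intensity_a: float, intensity_b: float, area_fraction_a: float) -> float:
    """Intensity of an edge pixel as the area-weighted sum of the two regions
    ('colours') A and B into which the edge splits the pixel."""
    return area_fraction_a * intensity_a + (1.0 - area_fraction_a) * intensity_b

def estimate_area_fraction(pixel_intensity: float, intensity_a: float, intensity_b: float) -> float:
    """Invert the model: recover the sub-pixel area covered by region A."""
    return (pixel_intensity - intensity_b) / (intensity_a - intensity_b)

# Hypothetical example: lumen intensity 200, wall intensity 50, observed edge pixel 140.
frac = estimate_area_fraction(140, 200, 50)
print(frac)   # 0.6 of the pixel lies on the lumen side of the edge
```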
Abstract:
[EN] The present study aimed to determine the spawning efficacy, egg quality and egg quantity of captive-bred meagre induced with a single gonadotrophin-releasing hormone agonist (GnRHa) injection of 0, 1, 5, 10, 15, 20, 25, 30, 40 or 50 μg kg–1, in order to identify a recommended optimum dose to induce spawning. The doses of 10, 15 and 20 μg kg–1 gave eggs of the highest quality (measured as percentage of viability, floating, fertilisation and hatching) and quantity (measured as total number of eggs, number of viable eggs, number of floating eggs, number of hatched larvae and number of larvae that reabsorbed the yolk sac). All egg quantity parameters were described by Gaussian regression analysis with R² = 0.89 or R² = 0.88, and the Gaussian regression identified 15 μg kg–1 as the optimal dose. The study thus covered the full range, from low doses insufficient to stimulate a high spawning response (significantly lower egg quantities than at 15 μg kg–1, p < 0.05) through to high doses that stimulated the spawning of significantly lower egg quantities and of eggs with significantly lower quality (egg viability). In addition, the latency period (time from hormone application to spawning) decreased with increasing dose, following a regression with R² = 0.93, which suggests that higher doses accelerated oocyte development and in turn reduced egg quality and quantity. The identification of an optimal dose for the spawning of meagre, a species with high aquaculture potential, represents an important advance for the Mediterranean aquaculture industry.
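A minimal sketch of how a Gaussian dose-response curve of the kind described here could be fitted to locate the optimum dose; the dose grid matches the abstract, but the egg counts are made-up illustrative numbers, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(dose, peak, optimum, width):
    """Gaussian dose-response: egg quantity peaks at the optimum dose."""
    return peak * np.exp(-((dose - optimum) ** 2) / (2.0 * width ** 2))

# Hypothetical data: GnRHa dose (ug/kg) vs. relative number of viable eggs.
doses = np.array([0, 1, 5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
eggs = np.array([0, 5, 40, 80, 100, 85, 60, 40, 15, 5], dtype=float)

params, _ = curve_fit(gaussian, doses, eggs, p0=[100.0, 15.0, 10.0])
peak, optimum, width = params
print(f"estimated optimum dose: {optimum:.1f} ug/kg")
```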
Abstract:
[EN] Energy transmission through a box-shaped floating breakwater (FB) is examined, under simplified conditions, by using the smoothed particle hydrodynamics (SPH) method, a mesh-free particle numerical approach. The efficiency of the structure is assessed in terms of the transmission coefficient as a function of the wave period and of the location of the floating breakwater relative to the zone to be protected. Preliminary results concerning wave energy transmission reveal a clear improvement in efficiency as the wave period decreases, and an important role of the bathymetry.
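A minimal sketch of the transmission-coefficient measure used to judge the breakwater's efficiency; the wave heights and periods below are hypothetical placeholders, whereas in the study they would come from the SPH simulation:

```python
def transmission_coefficient(incident_height_m: float, transmitted_height_m: float) -> float:
    """Kt = Ht / Hi: the fraction of incident wave height that passes the floating
    breakwater. Lower values mean better protection of the leeward zone."""
    return transmitted_height_m / incident_height_m

# Hypothetical comparison over a few wave periods (s): shorter periods are attenuated more.
cases = {4.0: (1.0, 0.25), 6.0: (1.0, 0.45), 8.0: (1.0, 0.70)}
for period, (hi, ht) in cases.items():
    print(f"T = {period} s -> Kt = {transmission_coefficient(hi, ht):.2f}")
```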
Abstract:
This work presents a sensitivity analysis of the most significant design parameters for the mooring systems of floating wave energy devices, commonly known as Floating Wave Energy Converters (F-WEC). Converters of this type are installed offshore and can rely on different operating principles for energy production: exploitation of the oscillatory motion of the waves (so-called Wave Active Bodies, the type to which most converters belong), wave overtopping (Overtopping Devices), or the oscillating water column principle (Oscillating Water Columns). The choice of the installation site of such devices requires an adequate design of the mooring system, whose purpose is to keep the device within a sufficiently small neighbourhood of the point where it was originally placed. At the same time, moorings should be considered an integral element of the system to be designed, in order to increase the wave-power extraction efficiency. The main issues concerning mooring systems are the strength of the system (reliability, fatigue) and its cost. The two issues are interrelated, since increasing strength increases the complexity of the mooring system (more lines, larger diameters, greater weight per unit length of each line, and so on). It is clear, however, that more reliable systems would lower production costs and would certainly make wave energy more competitive on the energy market. Individual devices require different design approaches, and the economics of a mooring system are closely tied to the design of the device itself. To date, a number of near-prototype-scale WEC installations have failed because of the collapse of their mooring systems, drawing attention to the problem of efficient, reliable and safe design.
Abstract:
This thesis focuses on two aspects of European economic integration: exchange rate stabilization between non-euro countries and the euro area, and real and nominal convergence of Central and Eastern European countries. Each chapter covers these aspects from both a theoretical and an empirical perspective. Chapter 1 investigates whether the introduction of the euro was accompanied by a shift in the de facto exchange rate policy of European countries outside the euro area, using methods recently developed in the literature to detect "Fear of Floating" episodes. I find that European inflation targeters have tried to stabilize the euro exchange rate after its introduction, whereas fixed exchange rate arrangements, apart from official policy changes, remained stable. Finally, the euro seems to have gained a relevant role as a reference currency even outside Europe. Chapter 2 proposes an approach to estimating central bank preferences starting from the central bank's optimization problem within a small open economy, using Sweden as a case study, to establish whether stabilization of the exchange rate played a role in the monetary policy rule of the Riksbank. The results show that it did not influence interest rate setting; exchange rate stabilization probably occurred as a result of increased economic integration and business cycle convergence. Chapter 3 studies the interactions between wages in the public sector, the traded private sector and the closed sector in ten EU transition countries. The theoretical literature on wage spillovers suggests that the traded sector should be the leader in wage setting, with non-traded sectors' wages adjusting. We show that large heterogeneity across countries is present, and that sheltered and public sector wages are often leaders in wage determination. This result is relevant from a policy perspective, since wage spillovers leading to costs growing faster than productivity may affect the international cost competitiveness of the traded sector.
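As context for the kind of question Chapter 2 asks, here is a stylized sketch of testing whether an exchange-rate term enters an interest-rate rule; this is not the chapter's method (which starts from the central bank's optimization problem), and all data below are simulated placeholders:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical quarterly data for a small open economy: inflation gap, output gap,
# and the change in the log exchange rate against the euro.
rng = np.random.default_rng(0)
n = 80
infl_gap = rng.normal(0, 1, n)
output_gap = rng.normal(0, 1, n)
d_exchange = rng.normal(0, 1, n)
policy_rate = 2.0 + 1.5 * infl_gap + 0.5 * output_gap + rng.normal(0, 0.2, n)

# Exchange-rate-augmented rule: i_t = c + a*pi_gap + b*y_gap + g*d(e) + eps.
# A coefficient g close to zero is consistent with the finding that the exchange
# rate did not directly influence interest rate setting.
X = sm.add_constant(np.column_stack([infl_gap, output_gap, d_exchange]))
result = sm.OLS(policy_rate, X).fit()
print(result.params)
```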
Abstract:
The coastal phreatic aquifer of Ravenna is intensely salinized up to several kilometres inland. The aquifer body consists of sands resting on a clayey substrate at an average depth of 25 m; the outcropping deposits are sands and clays. The work consists of a characterization of the salinization state using indirect methods (geoelectrics) and direct methods (readings of the physical parameters of the water in wells). The vertical electrical soundings (V.E.S.) show a seasonality due to the different amounts of rainfall, and hence of recharge: areas with surface deposits of high hydraulic conductivity (sands) have a freshwater lens between 0.1 and 2.25 m thick, below which lies a mixing zone with thicknesses ranging from 1.00 to 12.00 m, whereas where the surface deposits have low hydraulic conductivity (sandy silts and sandy clays) the freshwater lens disappears and the mixing zone is thin. Direct measurements in wells show a water table almost everywhere below sea level in both monitored months, June and December 2010, with a slightly greater depth in December. The lithological reconstruction yields an aquifer composed of 4×10^9 m3 of sand; assuming an average porosity of 30%, 1.2×10^9 m3 of water are therefore present. Numerical modelling (MODFLOW-SEAWAT 2000) indicates that the origin of the salt water found in the aquifer is more easily explained by assuming its presence since the formation of the aquifer, as a residue of the regressing marine waters. A further issue addressed is the evaluation of the applicability of the multilevel (mini-filter) sampling methodology in a study of groundwater salinization. An experimental transect was built, which allowed the fresh/brackish/salt water interface to be mapped with a precision not achievable before.
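The water-volume figure quoted above follows from a simple porosity calculation; a minimal sketch reproducing that arithmetic with the abstract's own numbers:

```python
def stored_water_volume(sand_volume_m3: float, porosity: float) -> float:
    """Pore-water volume stored in the aquifer: total sand volume times porosity."""
    return sand_volume_m3 * porosity

# Values quoted in the abstract: 4e9 m3 of sand at an assumed average porosity of 30%.
print(f"{stored_water_volume(4e9, 0.30):.2e} m3")   # 1.20e+09 m3 of water
```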
Abstract:
Single-processor (CPU) microprocessors saw a rapid growth in performance and a steady reduction in cost for about twenty years. These microprocessors brought computing power on the order of GFLOPS (Giga Floating Point Operations per Second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise enabled new program features, better user interfaces and many other benefits. However, this growth slowed down sharply in 2003 because of ever higher power consumption and heat dissipation problems, which prevented further increases in clock frequency: the physical limits of silicon were getting closer and closer. To work around the problem, CPU (Central Processing Unit) manufacturers started designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a series of sequential commands. Programs that had always benefited from performance improvements with every new CPU generation therefore stopped gaining performance, since they ran on a single core and did not exploit the full power of the CPU. To fully exploit the power of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry conquered a considerable market share: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. Software houses developing video games, in order to make their titles more attractive, rely on ever more powerful and often poorly optimized graphics engines, making them extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a true performance race that has led to products with staggering computing capabilities. Unlike CPUs, however, which at the beginning of the 2000s took the multicore road in order to keep supporting sequential programs, GPUs have become manycore, i.e. equipped with hundreds upon hundreds of small cores that perform computations in parallel. Can this immense computing capability be used in other application fields? The answer is yes, and the goal of this thesis is precisely to assess, at the current state of the art, how and with what efficiency a general-purpose program can make use of the GPU instead of the CPU.
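A minimal sketch of how sustained floating-point throughput (GFLOPS) of the kind mentioned above can be estimated on the host CPU, assuming only NumPy; the same product run through a GPU array library would give the GPU-side figure this thesis compares against, but that path is not shown here:

```python
import time
import numpy as np

def measure_gflops(n: int = 2048, repeats: int = 5) -> float:
    """Rough sustained throughput of an n x n matrix multiply: about 2*n^3
    floating-point operations per product, averaged over a few repetitions."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    return 2.0 * n**3 / elapsed / 1e9

print(f"~{measure_gflops():.1f} GFLOPS on the CPU")
```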
Abstract:
The work presented in this thesis deals with complex materials obtained by self-assembly of monodisperse colloidal particles, also called colloidal crystallization. Two main fields of interest were investigated: the first deals with the fabrication of colloidal monolayers and the nanostructures that derive from them; the second focuses on the phononic properties of colloidal particles, crystals, and glasses. For the fabrication of colloidal monolayers a method is introduced which is based on the sparse distribution of dry colloidal particles on a parent substrate. In the ensuing floating step the colloidal monolayer assembles readily at the three-phase contact line, giving a 2D hexagonally ordered film under the right conditions. The unique feature of this fabrication process is an anisotropic shrinkage that occurs alongside the floating step. This phenomenon is exploited for the tailored structuring of colloidal monolayers, leading to designed hetero-monolayers by inkjet printing. Furthermore, the mechanical stability of the floating monolayers allows deposition on hydrophobic substrates, which enables the fabrication of ultraflat nanostructured surfaces. Densely packed arrays of crescent-shaped nanoparticles have also been synthesized. It is possible to stack those arrays in a 3D manner, allowing the individual layers to be mutually oriented. In a step towards 3D mesoporous materials, a methodology to synthesize hierarchically structured inverse opals is introduced. The deposition of colloidal particles in the free voids of a host inverse opal allows for the fabrication of composite inverse opals on two length scales. The phononic properties of colloidal crystals and films are characterized by Brillouin light scattering (BLS). First, the resonant modes of colloidal particles consisting of polystyrene, of a copolymer of methyl methacrylate and butyl acrylate, or of a silica core-PMMA shell topography are investigated, giving insight into their individual mechanical properties. The infiltration of colloidal films with an index-matching liquid allows the phonon dispersion relation to be measured, which leads to the assignment of band gaps to the material under investigation. Here, two band gaps could be found: one originating from the fcc order in the colloidal crystal (Bragg gap), the other stemming from the vibrational eigenmodes of the colloidal particles (hybridization gap).
Abstract:
The dynamics of a passive back-to-back test rig have been characterised, leading to a multi-coordinate approach for the analysis of arbitrary test configurations. Universal joints have been introduced into a typical pre-loaded back-to-back system in order to produce an oscillating torsional moment in a test specimen. Two different arrangements have been investigated using a frequency-based sub-structuring approach: the receptance method. A numerical model has been developed in accordance with this theory, allowing interconnection of systems with two coordinates and closed multi-loop schemes. The model calculates the receptance functions and the modal and deflected shapes of a general system. Closed-form expressions have been developed for the following individual elements: a servomotor, a damped continuous shaft and a universal joint. Numerical results for specific cases have been compared with published data in the literature and with experimental measurements undertaken in the present work. Due to the complexity of the universal joint and its oscillating dynamic effects, a more detailed analysis of this component has been developed. Two models have been presented. The first represents the joint as two inertias connected by a massless cross-piece. The second, derived from the dynamic analysis of a spherical four-link mechanism, considers the contribution of the floating element and its gyroscopic effects. An investigation into non-linear behaviour has led to a time domain model that utilises the fourth-order Runge-Kutta method for resolution of the dynamic equations. It has been demonstrated that the torsional receptances of a universal joint, derived using the simple model, result in representation of the joint as an equivalent variable inertia. In order to verify the model, a test rig has been built and experimental validation undertaken. The variable inertia of a universal joint has led to a novel application of the component as a passive device for the balancing of inertia variations in slider-crank mechanisms.
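A minimal sketch of the classical fourth-order Runge-Kutta step used for time-domain models of this kind; the oscillator below is a generic torsional single-degree-of-freedom stand-in with made-up parameters, not the rig's actual equations:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Stand-in torsional oscillator: J*theta'' + c*theta' + k*theta = 0, state y = [theta, theta_dot].
J, c, k = 0.05, 0.2, 500.0
def torsional(t, y):
    theta, theta_dot = y
    return np.array([theta_dot, -(c * theta_dot + k * theta) / J])

y, dt = np.array([0.01, 0.0]), 1e-4
for step in range(10000):
    y = rk4_step(torsional, step * dt, y, dt)
print(y)   # state after 1 s of simulated free decay
```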
Abstract:
The goal of this thesis was to implement a client-server application for Android devices based on the crowdsourcing paradigm. The focus was on finding a way to let users report road events without being distracted from driving, allowing them to interact vocally with the device to submit different kinds of notifications. A road-speed detection system is implemented by means of anonymous data sent by users; it integrates with the notification system and allows a better representation of road traffic conditions. In addition, a satellite navigator with turn-by-turn technology has been implemented, from which users can plan routes, so that the application ultimately acts as a tool able to support drivers from several points of view.
Abstract:
Natural hazards affecting industrial installations can directly or indirectly cause an accident or series of accidents with serious consequences for the environment and for human health. Accidents initiated by a natural hazard or disaster which result in the release of hazardous materials are commonly referred to as Natech (Natural Hazard Triggering a Technological Disaster) accidents. The conditions brought about by these kinds of events are particularly problematic: the presence of the natural event increases the probability of exposure and causes more serious consequences than standard technological accidents. Despite a growing body of research and more stringent regulations for the design and operation of industrial activities, Natech accidents remain a threat. This is partly due to the absence of data and of dedicated risk-assessment methodologies and tools. Even the Seveso Directives for the control of risks due to major accident hazards do not include any specific requirements regarding the management of Natech risks in the process industries. Among the few available tools is the European Standard EN 62305, which addresses generic industrial sites, requiring the possibility of lightning to be taken into account and appropriate protection measures to be selected. Since it is intended for generic industrial installations, this standard sets the requirements for the design, construction and modification of structures, and is thus mainly oriented towards conventional civil buildings. A first purpose of this project is to study the effects and consequences of lightning on industrial sites, lightning being the most common adverse natural phenomenon in Europe and the cause of several industrial accidents initiated by natural causes. The industrial sector most susceptible to accidents triggered by lightning is the petrochemical one, due to the presence of atmospheric tanks (especially floating-roof tanks) containing flammable vapours which can easily be ignited by a lightning strike or by lightning secondary effects (such as electrostatic and electromagnetic pulses or ground currents). A second purpose of this work is to apply the procedure proposed by the European Standard to a specific kind of industrial plant, i.e. a chemical factory, in order to highlight the critical aspects of this application. A case-study plant handling flammable liquids was selected. The application of the European Standard made it possible to estimate the contribution of lightning activity to the total value of the default release frequency suggested by guidelines for atmospheric storage tanks. However, it became evident that the European Standard does not introduce any parameters explicitly accounting for the amount of dangerous substances which could be ignited or released. Furthermore, the parameters proposed to describe the characteristics of structures potentially subjected to lightning strikes are insufficient to take into account the specific features of the different chemical equipment commonly present in chemical plants.
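A minimal sketch of the kind of strike-frequency estimate the EN 62305 assessment is built around: the expected number of dangerous events per year obtained from the ground flash density, the collection area of the structure and a location factor. The collection-area formula is an EN 62305-2 style expression for an isolated rectangular structure, the tank dimensions are hypothetical, and a real assessment adds further factors and tolerable-risk comparisons:

```python
import math

def collection_area_m2(length_m: float, width_m: float, height_m: float) -> float:
    """Collection area of an isolated rectangular structure: its footprint enlarged
    by a border of three times the structure height (EN 62305-2 style)."""
    return (length_m * width_m
            + 2 * (3 * height_m) * (length_m + width_m)
            + math.pi * (3 * height_m) ** 2)

def dangerous_events_per_year(ground_flash_density_km2: float, area_m2: float,
                              location_factor: float = 1.0) -> float:
    """N = Ng * Ad * Cd * 1e-6: expected direct strikes to the structure per year."""
    return ground_flash_density_km2 * area_m2 * location_factor * 1e-6

# Hypothetical floating-roof tank, 15 m high, treated as a 40 m x 40 m footprint,
# in an area with 2 flashes/km^2/year.
ad = collection_area_m2(40, 40, 15)
print(f"{dangerous_events_per_year(2.0, ad):.3f} direct strikes/year")
```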
Abstract:
This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating point instructions. It then addresses the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, whose execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions: the approach reduces the supply cost due to high peak power whilst having a negligible impact on the parallelism of the computational nodes, and the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
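A minimal sketch of a peak-power-aware assignment of the kind the scheduler targets: greedily placing tasks so that no node exceeds its power cap. The task powers, node caps and the greedy rule here are illustrative assumptions, not the thesis' algorithm:

```python
from typing import Dict, List

def schedule_under_power_cap(task_power_w: List[float],
                             node_cap_w: Dict[str, float]) -> Dict[str, List[int]]:
    """Greedy placement: assign each task (largest first) to the node with the most
    remaining power headroom, refusing assignments that would exceed a node's cap."""
    load = {node: 0.0 for node in node_cap_w}
    placement = {node: [] for node in node_cap_w}
    for task_id, power in sorted(enumerate(task_power_w), key=lambda t: -t[1]):
        node = max(node_cap_w, key=lambda n: node_cap_w[n] - load[n])
        if load[node] + power > node_cap_w[node]:
            raise RuntimeError(f"task {task_id} ({power} W) does not fit under the caps")
        load[node] += power
        placement[node].append(task_id)
    return placement

# Hypothetical mixed CPU-GPU nodes with different power budgets.
print(schedule_under_power_cap([120, 90, 250, 60, 180], {"node0": 400.0, "node1": 350.0}))
```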