957 results for floating


Relevance: 10.00%

Publisher:

Abstract:

This work presents a sensitivity analysis of the most significant design parameters for the mooring systems of floating wave energy devices, commonly known as Floating Wave Energy Converters (F-WEC). Converters of this type are installed offshore and may rely on different working principles for energy production: the exploitation of the oscillatory motion of the waves (Wave Active Bodies, the class to which most converters belong), wave overtopping (Overtopping Devices), or the oscillating water column principle (Oscillating Water Columns). The choice of the installation site for such devices requires an adequate design of the mooring system, whose purpose is to keep the device within a sufficiently small neighbourhood of the point where it was originally placed. At the same time, the moorings should be treated as an integral element of the system to be designed, in order to increase the wave power extraction efficiency. The main issues concerning mooring systems are the strength of the system (reliability, fatigue) and its cost-effectiveness. The two issues are linked, since greater strength implies greater complexity of the mooring system (more lines, larger diameters, greater weight per unit length of each line, and so on). It is clear, however, that more reliable systems would lower production costs and would certainly make wave energy more competitive on the energy market. Individual devices require different design approaches, and the economics of a mooring system is closely tied to the design of the device itself. To date, several near-prototype-scale WEC installations have failed because of the collapse of their own mooring system, drawing attention to the problem of an efficient, reliable and safe design.

Relevance: 10.00%

Publisher:

Abstract:

This thesis focuses on two aspects of European economic integration: exchange rate stabilization between non-euro countries and the euro area, and real and nominal convergence of Central and Eastern European countries. Each chapter covers these aspects from both a theoretical and an empirical perspective. Chapter 1 investigates whether the introduction of the euro was accompanied by a shift in the de facto exchange rate policy of European countries outside the euro area, using methods recently developed in the literature to detect "Fear of Floating" episodes. I find that European inflation targeters tried to stabilize the euro exchange rate after its introduction; fixed exchange rate arrangements, instead, apart from official policy changes, remained stable. Finally, the euro seems to have gained a relevant role as a reference currency even outside Europe. Chapter 2 proposes an approach to estimating central bank preferences starting from the central bank's optimization problem within a small open economy, using Sweden as a case study, to establish whether stabilization of the exchange rate played a role in the monetary policy rule of the Riksbank. The results show that it did not influence interest rate setting; exchange rate stabilization probably occurred as a result of increased economic integration and business cycle convergence. Chapter 3 studies the interactions between wages in the public sector, the traded private sector and the closed sector in ten EU transition countries. The theoretical literature on wage spillovers suggests that the traded sector should be the leader in wage setting, with non-traded sector wages adjusting. We show that large heterogeneity across countries is present, and that sheltered- and public-sector wages are often leaders in wage determination. This result is relevant from a policy perspective, since wage spillovers leading to costs growing faster than productivity may affect the international cost competitiveness of the traded sector.

Relevance: 10.00%

Publisher:

Abstract:

The coastal phreatic aquifer of Ravenna is heavily salinized up to several km inland. The aquifer body is formed by sands resting on a clayey substrate at an average depth of 25 m; the outcropping deposits are sands and clays. The work carried out consists of a characterization of the state of salinization using indirect methods (geoelectrics) and direct methods (readings of the physical parameters of the water in wells). The vertical electrical soundings (V.E.S.) show a seasonality due to the different amounts of rainfall and hence of recharge: areas with surface deposits of high hydraulic conductivity (sands) have a freshwater lens between 0.1 and 2.25 m thick, below which lies a mixing zone with thicknesses ranging from 1.00 to 12.00 m, whereas where the surface deposits have low hydraulic conductivity (sandy silts and sandy clays) the freshwater lens disappears and the mixing zone is thin. The direct measurements in wells show a water table depth almost everywhere below sea level in both monitored months, June and December 2010, with a slightly greater depth in December. The lithological reconstruction yields an aquifer composed of 4×10⁹ m³ of sand, so that, assuming an average porosity of 30%, about 1.2×10⁹ m³ of water are present. The numerical modelling (Modflow-SEAWAT 2000) indicates that the origin of the salt water found in the aquifer is more easily explained by assuming that it has been present since the formation of the aquifer, as a residue of the regressing marine waters. A further issue addressed is the evaluation of the mini-filter methodology in a study of groundwater salinization. The construction of an experimental transect was implemented, which allowed the freshwater/brackish/saltwater interface to be mapped with a precision not previously attainable.
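
As a quick check of the storage figure quoted above, the pore-water volume follows directly from the sand volume and the assumed porosity; a minimal sketch in Python (values taken from the abstract, porosity assumed uniform):

```python
# Pore-water volume of the Ravenna coastal aquifer (values from the abstract).
sand_volume_m3 = 4e9      # total volume of sand forming the aquifer body [m^3]
porosity = 0.30           # assumed average porosity

water_volume_m3 = sand_volume_m3 * porosity
print(f"Stored groundwater: {water_volume_m3:.2e} m^3")  # -> 1.20e+09 m^3
```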

Relevance: 10.00%

Publisher:

Abstract:

Single-processor microprocessors (CPUs) saw rapid performance growth and falling costs for about twenty years. These microprocessors brought computing power in the order of GFLOPS (Giga Floating Point Operations per Second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise brought new features to programs, better user interfaces and many other benefits. However, the growth slowed abruptly in 2003 because of ever-increasing power consumption and heat dissipation problems, which prevented further clock frequency increases. The physical limits of silicon were getting ever closer. To work around the problem, CPU (Central Processing Unit) manufacturers started designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a series of sequential commands. Programs that had always benefited from performance improvements with each new CPU generation therefore stopped gaining performance: being executed on a single core, they could not exploit the full power of the CPU. To fully exploit the power of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry has conquered a considerable market share: in 2013 alone, almost 100 billion dollars will be spent on gaming hardware and software. Software houses developing video games, in order to make their titles more attractive, rely on ever more powerful and often poorly optimized graphics engines, which makes them extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a genuine performance race that has led to products with staggering computing power. Unlike the CPUs, which at the beginning of the 2000s took the multicore route in order to keep supporting sequential programs, GPUs have become manycore, i.e. equipped with hundreds of small cores performing computations in parallel. Can this immense computing power be used in other application fields? The answer is yes, and the goal of this thesis is precisely to assess, at the current state of the art, in what way and with what efficiency a generic piece of software can make use of the GPU instead of the CPU.
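
To illustrate the kind of CPU-to-GPU offloading the thesis investigates, the sketch below times the same dense matrix product on the CPU with NumPy and, where available, on the GPU with CuPy. CuPy is only one possible choice of GPGPU library and is an assumption here, not necessarily the toolchain examined in the thesis.

```python
import time
import numpy as np

def time_matmul(xp, n=2000):
    """Multiply two n x n random matrices using the given array module."""
    a = xp.random.rand(n, n).astype(xp.float32)
    b = xp.random.rand(n, n).astype(xp.float32)
    t0 = time.perf_counter()
    _ = a @ b
    # GPU kernels launch asynchronously; synchronize before stopping the clock.
    if hasattr(xp, "cuda"):
        xp.cuda.Stream.null.synchronize()
    return time.perf_counter() - t0

print("CPU (NumPy):", time_matmul(np), "s")
try:
    import cupy as cp   # hypothetical GPU library choice
    print("GPU (CuPy): ", time_matmul(cp), "s")
except ImportError:
    print("CuPy not installed: GPU path skipped")
```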

Relevance: 10.00%

Publisher:

Abstract:

The work presented in this thesis deals with complex materials obtained by self-assembly of monodisperse colloidal particles, also called colloidal crystallization. Two main fields of interest were investigated: the first deals with the fabrication of colloidal monolayers and the nanostructures derived from them; the second focuses on the phononic properties of colloidal particles, crystals, and glasses. For the fabrication of colloidal monolayers a method is introduced which is based on the sparse distribution of dry colloidal particles on a parent substrate. In the ensuing floating step the colloidal monolayer assembles readily at the three-phase contact line, giving a 2D hexagonally ordered film under the right conditions. The unique feature of this fabrication process is an anisotropic shrinkage that occurs alongside the floating step. This phenomenon is exploited for the tailored structuring of colloidal monolayers, leading to designed hetero-monolayers by inkjet printing. Furthermore, the mechanical stability of the floating monolayers allows deposition on hydrophobic substrates, which enables the fabrication of ultraflat nanostructured surfaces. Densely packed arrays of crescent-shaped nanoparticles have also been synthesized. It is possible to stack those arrays in a 3D manner, allowing the individual layers to be mutually oriented. In a step towards 3D mesoporous materials, a methodology to synthesize hierarchically structured inverse opals is introduced. The deposition of colloidal particles in the free voids of a host inverse opal allows for the fabrication of composite inverse opals on two length scales. The phononic properties of colloidal crystals and films are characterized by Brillouin light scattering (BLS). First, the resonant modes of colloidal particles consisting of polystyrene, of a copolymer of methyl methacrylate and butyl acrylate, or of a silica core-PMMA shell topography are investigated, giving insight into their individual mechanical properties. The infiltration of colloidal films with an index-matching liquid allows the phonon dispersion relation to be measured. This leads to the assignment of band gaps to the material under investigation. Here, two band gaps could be found, one originating from the fcc order in the colloidal crystal (Bragg gap), the other stemming from the vibrational eigenmodes of the colloidal particles (hybridization gap).

Relevance: 10.00%

Publisher:

Abstract:

The dynamics of a passive back-to-back test rig have been characterised, leading to a multi-coordinate approach for the analysis of arbitrary test configurations. Universal joints have been introduced into a typical pre-loaded back-to-back system in order to produce an oscillating torsional moment in a test specimen. Two different arrangements have been investigated using a frequency-based sub-structuring approach: the receptance method. A numerical model has been developed in accordance with this theory, allowing interconnection of systems with two coordinates and closed multi-loop schemes. The model calculates the receptance functions and the modal and deflected shapes of a general system. Closed-form expressions have been developed for the following individual elements: a servomotor, a damped continuous shaft and a universal joint. Numerical results for specific cases have been compared with data published in the literature and with experimental measurements undertaken in the present work. Due to the complexity of the universal joint and its oscillating dynamic effects, a more detailed analysis of this component has been developed. Two models have been presented. The first represents the joint as two inertias connected by a massless cross-piece. The second, derived from the dynamic analysis of a spherical four-link mechanism, considers the contribution of the floating element and its gyroscopic effects. An investigation into non-linear behaviour has led to a time-domain model that utilises the fourth-order Runge-Kutta method for the solution of the dynamic equations. It has been demonstrated that the torsional receptances of a universal joint, derived using the simple model, result in a representation of the joint as an equivalent variable inertia. In order to verify the model, a test rig has been built and experimental validation undertaken. The variable inertia of a universal joint has led to a novel application of the component as a passive device for the balancing of inertia variations in slider-crank mechanisms.
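
The time-domain model mentioned above integrates the torsional equation of motion with a fourth-order Runge-Kutta scheme. A minimal sketch of that idea for a single degree of freedom with a position-dependent inertia is shown below; the inertia law, stiffness, damping and torque values are illustrative assumptions, not the rig's identified parameters.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the test rig)
J0, dJ = 0.010, 0.002                  # mean inertia and variation amplitude [kg m^2]
c, k = 0.05, 50.0                      # damping [N m s/rad] and stiffness [N m/rad]
T = lambda t: 0.5 * np.sin(10.0 * t)   # external torque [N m]

J = lambda th: J0 + dJ * np.cos(2.0 * th)          # 2-per-rev inertia variation
dJdth = lambda th: -2.0 * dJ * np.sin(2.0 * th)

def f(t, y):
    """State derivative for y = [theta, omega] with variable inertia:
    J(th)*alpha + 0.5*J'(th)*omega^2 + c*omega + k*theta = T(t)."""
    th, om = y
    alpha = (T(t) - 0.5 * dJdth(th) * om**2 - c * om - k * th) / J(th)
    return np.array([om, alpha])

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y, h = np.array([0.0, 0.0]), 1e-4
for i in range(50000):                 # 5 s of simulated time
    y = rk4_step(i * h, y, h)
print("theta, omega at t = 5 s:", y)
```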

Relevance: 10.00%

Publisher:

Abstract:

The goal of this thesis was to implement a client-server application for Android devices based on the crowdsourcing paradigm. The focus was on finding a way to let the user report road events without being distracted from driving, allowing them to interact vocally with the device to submit different kinds of notifications. A road speed detection system was implemented through the submission of anonymous data by the users; it integrates with the notification system and allows a better representation of road traffic conditions. In addition, a satellite navigator with turn-by-turn technology was implemented, from which users can plan routes, so that the application ultimately acts as a tool able to support drivers from several points of view.

Relevance: 10.00%

Publisher:

Abstract:

Natural hazards affecting industrial installations can directly or indirectly cause an accident, or a series of accidents, with serious consequences for the environment and for human health. Accidents initiated by a natural hazard or disaster which result in the release of hazardous materials are commonly referred to as Natech (Natural Hazard Triggering a Technological Disaster) accidents. The conditions brought about by these kinds of events are particularly problematic: the presence of the natural event increases the probability of exposure and leads to consequences more serious than those of standard technological accidents. Despite a growing body of research and more stringent regulations for the design and operation of industrial activities, Natech accidents remain a threat. This is partly due to the absence of data and of dedicated risk-assessment methodologies and tools. Even the Seveso Directives for the control of risks due to major accident hazards do not include any specific requirements regarding the management of Natech risks in the process industries. Among the few available tools is the European Standard EN 62305, which addresses generic industrial sites, requiring the possibility of lightning to be taken into account and the appropriate protection measures to be selected. Since it is intended for generic industrial installations, this standard sets the requirements for the design, construction and modification of structures, and is thus mainly oriented towards conventional civil buildings. A first purpose of this project is to study the effects and consequences of lightning on industrial sites, lightning being the most common adverse natural phenomenon in Europe and the cause of several industrial accidents initiated by natural events. The industrial sector most susceptible to accidents triggered by lightning is the petrochemical one, due to the presence of atmospheric tanks (especially floating-roof tanks) containing flammable vapours which can easily be ignited by a lightning strike or by lightning secondary effects (such as electrostatic and electromagnetic pulses or ground currents). A second purpose of this work is to apply the procedure proposed by the European Standard to a specific kind of industrial plant, i.e. a chemical factory, in order to highlight the critical aspects of this implementation. A case-study plant handling flammable liquids was selected. The application of the European Standard made it possible to estimate the contribution of lightning activity to the total value of the default release frequency suggested by guidelines for atmospheric storage tanks. However, it became evident that the European Standard does not introduce any parameter that explicitly accounts for the amount of dangerous substances which could be ignited or released. Furthermore, the parameters proposed to describe the characteristics of the structures potentially subjected to lightning strikes are insufficient to take into account the specific features of the different chemical equipment commonly present in chemical plants.
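
For orientation, the frequency assessment in EN 62305-2 starts from the expected number of dangerous events per year, obtained from the local ground flash density and an equivalent collection area of the structure. The sketch below reproduces that basic step for an isolated rectangular structure; the dimensions and flash density are invented illustrative values, and the standard's further probability and loss factors are omitted.

```python
import math

# Illustrative inputs (assumed values, not from the case-study plant)
Ng = 2.5                      # ground flash density [flashes / km^2 / year]
L, W, H = 40.0, 25.0, 15.0    # structure footprint and height [m]
Cd = 1.0                      # location factor for an isolated structure

# Equivalent collection area of an isolated rectangular structure:
# the footprint enlarged by a strip of width 3H, with rounded corners.
Ad = L * W + 2 * (3 * H) * (L + W) + math.pi * (3 * H) ** 2   # [m^2]

# Expected number of dangerous events per year
Nd = Ng * Ad * Cd * 1e-6
print(f"Collection area Ad = {Ad:.0f} m^2, events/year Nd = {Nd:.2e}")
```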

Relevance: 10.00%

Publisher:

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating point instructions. It then addresses the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions: the approach reduces the supply cost due to high peak power while having a negligible impact on the parallelism of the computational nodes, and, from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
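
A toy version of the peak-power-aware idea can be expressed as a greedy assignment of tasks to CPU/GPU resources that keeps the instantaneous power draw of each node below a cap. The resource table and power figures below are invented for illustration and do not correspond to the scheduler developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    power_cap: float            # maximum allowed power draw [W]
    load: float = 0.0           # current power draw [W]
    tasks: list = field(default_factory=list)

def schedule(tasks, nodes):
    """Greedily place each task on the least-loaded node that stays under its cap."""
    pending = []
    for name, power in sorted(tasks, key=lambda t: -t[1]):   # biggest first
        candidates = [n for n in nodes if n.load + power <= n.power_cap]
        if not candidates:
            pending.append(name)                 # defer: would exceed every cap
            continue
        target = min(candidates, key=lambda n: n.load)
        target.load += power
        target.tasks.append(name)
    return pending

# Invented example: task name with estimated power draw [W]
tasks = [("fft", 90), ("raytrace", 220), ("reduce", 60), ("bfs", 150), ("blur", 180)]
nodes = [Node("cpu0", power_cap=250), Node("gpu0", power_cap=300)]

deferred = schedule(tasks, nodes)
for n in nodes:
    print(n.name, n.load, "W", n.tasks)
print("deferred:", deferred)
```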

Relevance: 10.00%

Publisher:

Abstract:

The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs) from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and with different layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared with numerical results, in order to extend the experimental database. The results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimises the marine space required for the installation.
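
The wave transmission mentioned above is commonly expressed as the ratio between the significant wave height measured behind the array and the incident one; a minimal sketch of that ratio (the wave heights are illustrative numbers, not the Aalborg measurements):

```python
# Transmission coefficient behind a WEC array: Kt = Hs,transmitted / Hs,incident
incident_Hs = 2.0        # significant wave height seaward of the array [m] (assumed)
transmitted_Hs = 1.7     # significant wave height behind the array [m] (assumed)

Kt = transmitted_Hs / incident_Hs
print(f"Kt = {Kt:.2f}  ->  {100 * (1 - Kt):.0f}% reduction in wave height")
```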

Relevance: 10.00%

Publisher:

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings such as rounding errors or underflow, therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight controlling or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most of the cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
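
The core idea of computing a floating-point candidate and then certifying it with exact arithmetic can be illustrated on a tiny LP. The sketch below uses SciPy only to obtain a candidate point and Python's Fraction type for the exact feasibility check; this is a simplification of the relative-interior construction described in the abstract.

```python
from fractions import Fraction
from scipy.optimize import linprog

# Tiny LP: minimize c^T x  subject to  A x <= b, x >= 0
A = [[1, 1], [2, 1]]
b = [4, 6]
c = [-1, -1]

# Step 1: floating-point solve (subject to rounding errors)
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
x_float = res.x

# Step 2: certify feasibility of a rounded candidate with exact rational arithmetic
x_exact = [Fraction(xi).limit_denominator(10**6) for xi in x_float]
feasible = all(
    sum(Fraction(aij) * xj for aij, xj in zip(row, x_exact)) <= Fraction(bi)
    for row, bi in zip(A, b)
) and all(xj >= 0 for xj in x_exact)

print("float solution:", x_float, "certified feasible:", feasible)
```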

Relevance: 10.00%

Publisher:

Abstract:

Physical and chemical riming experiments were carried out at the vertical wind tunnel of the Johannes Gutenberg University Mainz. In all experiments the ambient temperatures lay between about -15 and -5°C and the liquid water content ranged from 0.9 to about 1.6 g/m³, typical conditions for mixed-phase clouds in which riming takes place. Surface temperature measurements on growing suspended graupel particles showed that dry growth conditions prevailed during the experiments. First, graupel growth was studied on ice particles freely suspended in a laminar flow, with initial radii between 290 and 380 µm, which were rimed with supercooled liquid cloud droplets. The aim was to determine the collection kernel from the mass increase of the rimed ice particle and the mean liquid water content during the growth experiment. The values obtained for the collection kernels of the rimed ice particles ranged from 0.9 to 2.3 cm³/s, depending on their collector momentum (mass times fall velocity of the riming graupel), which lay between 0.04 and 0.10 g·cm/s. The experiments showed that the collection kernels measured here were higher than the collection kernels of liquid drops colliding with one another. From the present results and the available literature values, an empirical factor was developed that depends on the cloud droplet radius and describes this difference. For the investigated size ranges of collector particles and liquid droplets, the corrected collection kernel values can be incorporated into cloud models for the corresponding sizes. In the chemical experiments of this work, the uptake of various atmospheric trace gases (HNO3, HCl, H2O2, NH3 and SO2) during riming was investigated. For technical reasons these experiments had to be carried out with suspended ice particles, dendritic ice crystals and snowflakes, rimed with liquid cloud solution droplets. The concentrations of the solution from which the cloud droplets were generated by means of two-fluid nozzles lay between 1 and 120 mg/l. For the experiments with ammonia and sulfur dioxide, concentrations between 1 and 22 mg/l were used. The meltwater of the rimed suspended graupel and snowflakes was analyzed by ion chromatography and, together with the known concentration of the riming cloud droplets, the retention coefficient for each trace substance could be determined. It indicates the amount of trace substance that passes into the ice phase during the phase transition from liquid to solid. Nitric acid and hydrochloric acid were almost completely retained (mean values over all experiments of 99±8% and 100±9%, respectively). For hydrogen peroxide a mean retention coefficient of 65±17% was determined. The mean retention coefficient of ammonia was 92±21%, independent of the liquid water content, while for sulfur dioxide values of 53±10% for low and 29±7% for high liquid-phase concentrations were obtained. For some of the investigated trace substances a temperature dependence was observed and, where possible, described by parameterizations.
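
In the spirit of the physical part of the experiments, the collection kernel follows from the particle's mass gain during riming and the mean liquid water content, and the retention coefficient from the solute concentrations in the meltwater and in the riming droplets. A minimal sketch with invented numbers follows; the definitions are standard, but the values are not the Mainz data.

```python
# Collection kernel from riming growth: K = (dm/dt) / LWC
mass_gain_g = 1.5e-3          # mass gained by the graupel during the run [g] (assumed)
duration_s = 600.0            # growth time [s] (assumed)
lwc_g_per_m3 = 1.3            # mean liquid water content [g/m^3] (within the reported range)

lwc_g_per_cm3 = lwc_g_per_m3 * 1e-6
K_cm3_per_s = (mass_gain_g / duration_s) / lwc_g_per_cm3
print(f"collection kernel K = {K_cm3_per_s:.2f} cm^3/s")

# Retention coefficient: fraction of the trace substance kept in the ice phase
c_droplets_mg_l = 20.0        # concentration in the riming cloud droplets [mg/l] (assumed)
c_meltwater_mg_l = 13.0       # concentration measured in the meltwater [mg/l] (assumed)
retention = c_meltwater_mg_l / c_droplets_mg_l
print(f"retention coefficient = {retention:.0%}")
```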

Relevance: 10.00%

Publisher:

Abstract:

The Adriatic Sea is considered a feeding and developmental area for Mediterranean loggerhead turtles, but this area is severely threatened by human impacts. In the Adriatic Sea loggerhead turtles are often found stranded or floating, but they are also recovered as by-catch from fishing activities. Nevertheless, information about the population structuring and origin of the individuals found in the Adriatic Sea is still limited. Cooperation with fishermen and a good network of voluntary collaborators are essential for understanding their distribution and ecology and for developing conservation strategies in the Adriatic Sea. In this study, a comparative analysis of biometric data and DNA sequence polymorphism of the long fragment of the mitochondrial control region was carried out on ninety-three loggerheads recovered from three feeding areas in the Adriatic Sea: North-western, North-eastern and South Adriatic. Differences in turtle body sizes (e.g. Straight Carapace Length) among the three recovery areas and the relationship between SCL and the type of recovery were investigated. The origin of turtles from Mediterranean rookeries and the use of the Adriatic feeding habitats by loggerheads in different life stages were assessed to understand the migratory pathways of the species. The analysis of biometric data revealed a significant difference in turtle sizes between the Southern and the Northern Adriatic. Moreover, the size of captured turtles differed significantly from that of stranded and floating individuals; neritic sub-adults and adults are in fact more affected by incidental captures than juveniles because of their feeding behavior. The Bayesian mixed-stock analysis showed a strong genetic relationship between the Adriatic aggregates and Mediterranean rookeries, while a low proportion of individuals of Atlantic origin was detected in the Adriatic feeding grounds. The presence of migratory pathways towards the Adriatic Sea driven by the surface current system was reinforced by the finding of individuals bearing haplotypes endemic to the nesting populations of Libya, Greece and Israel. A relatively high contribution from Turkey and Cyprus to the North-western and South Adriatic populations was identified when the three sampled areas were analyzed independently. These results have to be taken into account from a conservation perspective, since coastal hazards affecting the turtles feeding in the Adriatic Sea may also affect nesting populations of the Eastern Mediterranean with a unique genetic pattern.

Relevance: 10.00%

Publisher:

Abstract:

The search for exact solutions of mixed integer problems is an active topic in the scientific community. State-of-the-art MIP solvers exploit a floating-point numerical representation, therefore introducing small approximations. Although such MIP solvers yield reliable results for the majority of problems, there are cases in which a higher accuracy is required. Indeed, it is known that for some applications floating-point solvers provide falsely feasible solutions, i.e. solutions marked as feasible because of approximations, which would not pass a check with exact arithmetic and cannot be practically implemented. The framework of the current dissertation is SCIP, a mixed integer programming solver mainly developed at Zuse Institute Berlin. At the same site we considered a new approach for exactly solving MIPs. Specifically, we developed a constraint handler to plug into SCIP, with the aim of analyzing the accuracy of the provided floating-point solutions and computing exact primal solutions starting from floating-point ones. We conducted a few computational experiments to test the exact primal constraint handler using two main settings. The analysis mode allowed us to collect statistics about the reliability of current SCIP solutions. Our results confirm that floating-point solutions are accurate enough for many instances; however, our analysis highlighted the presence of numerical errors of varying magnitude. Using the enforce mode, our constraint handler is able to suggest exact solutions starting from the integer part of a floating-point solution. With the latter setting, results show a general improvement in the quality of the provided final solutions, without a significant loss of performance.
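
A stripped-down version of what the enforce mode does, namely taking the integer part of a floating-point solution as given and checking or repairing the remaining constraints exactly, can be sketched with Python rationals. The small model below is invented, and the real constraint handler works inside SCIP's plugin API rather than on explicit lists of constraints.

```python
from fractions import Fraction

# Invented MIP: 3x + 2y <= 12, x + 2y <= 6, x integer, y >= 0, maximize x + y.
# Suppose the floating-point solver returned this (slightly noisy) solution:
x_float, y_float = 2.9999999997, 1.5000000002

# Enforce integrality exactly by rounding the integer variable ...
x = Fraction(round(x_float))
# ... then recompute the best exact value of the continuous variable for fixed x.
y = min(
    (Fraction(12) - 3 * x) / 2,   # from 3x + 2y <= 12
    (Fraction(6) - x) / 2,        # from x + 2y <= 6
)
y = max(y, Fraction(0))

# Exact feasibility check of the repaired solution
assert 3 * x + 2 * y <= 12 and x + 2 * y <= 6 and y >= 0
print(f"exact primal solution: x = {x}, y = {y}, objective = {x + y}")
```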

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an un-calibrated x-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field such that the floating point set is moved to the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the x-ray radiograph by an edge detector. Our experiment conducted on datasets of two patients and six cadavers demonstrates a mean reconstruction error of 1.9 mm.
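
The kernel density correlation criterion behind the matching step can be written compactly for a rigid translation: represent each point set as a sum of Gaussian kernels and maximize the correlation of the two densities, which reduces to a sum of pairwise Gaussian terms. The sketch below optimizes only a 2D translation with made-up point sets, whereas the full method estimates a regularized non-rigid displacement field.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
reference = rng.uniform(0, 10, size=(40, 2))              # reference point set (invented)
floating = reference + np.array([1.5, -0.8])              # floating set = shifted copy
floating += rng.normal(scale=0.05, size=floating.shape)   # plus a little noise

sigma = 1.0  # kernel bandwidth

def neg_kernel_correlation(t):
    """Negative sum of pairwise Gaussian affinities between the reference set
    and the floating set moved by the translation t."""
    moved = floating + t
    d2 = ((reference[:, None, :] - moved[None, :, :]) ** 2).sum(axis=2)
    return -np.exp(-d2 / (2 * sigma**2)).sum()

result = minimize(neg_kernel_correlation, x0=np.zeros(2), method="Nelder-Mead")
print("recovered translation:", result.x)   # should be close to (-1.5, 0.8)
```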