15 results for LOCAL SCALE-INVARIANCE
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The thesis analyses the relationships between agricultural development processes and the use of natural resources, in particular energy resources, at the international (developing and developed countries), national (Italy), regional (Emilia Romagna) and farm level, with the aim of assessing the eco-efficiency of agricultural development processes, its evolution over time and its main dynamics, also in relation to the problems of dependence on fossil resources, food security, and the substitution between agricultural land devoted to human food and to animal feed. For the two case studies at the macroeconomic level, the methodology known as "SUMMA", SUstainability Multi-method multi-scale Assessment (Ulgiati et al., 2006), was adopted, which integrates a set of impact categories from life cycle assessment (LCA), cost-benefit evaluations and the system-wide perspective of emergy accounting. The large-scale analysis was further enriched by a case study at the local scale, a farm producing milk and renewable electricity (photovoltaic and biogas). This study, conducted by means of LCA and contingent valuation, assessed the environmental, economic and social effects of scenarios for reducing dependence on fossil sources. The macroeconomic case studies show that, despite policies supporting efficiency gains and "green" forms of production, agriculture at the global level continues to evolve with an increasing dependence on fossil energy sources. The first effects of EU agricultural policies towards greater sustainability nevertheless seem to be emerging for the European countries. Overall, the energy footprint remains high, since the ongoing mechanization of agricultural processes must necessarily draw on energy sources that substitute for human labour. Agricultural land is decreasing in the European countries analysed and in Italy, increasing the risk of food insecurity, since the national population is instead growing.
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques: the first one, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns the reliable estimation of back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, without using the cross-correlation. We found that the large geometrical spread noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of the cross-correlation has not brought evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of the cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm thus developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing the immediate optimization of the array geometry if the early results suggest it.
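As an illustration of the waveform cross-correlation and interpolation step the abstract describes, the sketch below estimates the relative delay between two digitized traces and refines it to sub-sample precision by parabolic interpolation of the correlation peak. The function name, the synthetic wavelet and the noise level are assumptions made for this example; this is not the thesis's code.

```python
import numpy as np

def subsample_lag(x, y, dt):
    """Estimate the time shift of trace y relative to trace x (seconds).

    Cross-correlates the two traces and refines the integer-sample lag
    with a parabolic fit around the correlation maximum.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    cc = np.correlate(y, x, mode="full")           # lags from -(N-1) to +(N-1)
    k = int(np.argmax(cc))
    lag = k - (len(x) - 1)                         # integer-sample lag
    if 0 < k < len(cc) - 1:                        # parabolic peak interpolation
        c_m, c_0, c_p = cc[k - 1], cc[k], cc[k + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0.0:
            lag += 0.5 * (c_m - c_p) / denom
    return lag * dt

# Toy usage: two noisy copies of the same wavelet, one delayed by 0.013 s
dt = 0.01                                          # 100 Hz sampling
t = np.arange(0, 2, dt)
wavelet = np.exp(-((t - 0.8) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * (t - 0.8))
rng = np.random.default_rng(0)
tr1 = wavelet + 0.05 * rng.standard_normal(t.size)
tr2 = np.interp(t - 0.013, t, wavelet) + 0.05 * rng.standard_normal(t.size)
print(f"estimated shift: {subsample_lag(tr1, tr2, dt):.4f} s")
```

In the global-scale case the two traces would be the same phase from two events at one station; in the local-scale case, the same event at two sensors of the tripartite array.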
Abstract:
We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within Mediterranean-scale plate kinematics and to perform local-scale studies of the strain accumulation on active structures. We focus our attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use dislocation models and a non-linear constrained optimization algorithm to invert for fault geometric parameters and slip rates, and we evaluate the associated uncertainties by adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block modeling approach. Our models show that the inferred slip rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics, we perform a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms a microplate-like behaviour for all the investigated blocks. Within these kinematic boundary conditions we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and reduced chi-square (χ²ν) statistics. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To better constrain the slab geometry, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations with varying fault locking depth. Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
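The abstract combines elastic dislocation modeling of GPS velocities with bootstrap uncertainty estimation. A deliberately simplified one-dimensional analogue is sketched below: a buried screw-dislocation (arctangent) velocity profile is fitted by least squares and the residuals are bootstrapped to obtain confidence intervals on slip rate and locking depth. The antiplane geometry, station spacing, noise level and all numbers are illustrative assumptions, not the faults or data of the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def screw_dislocation(x_km, slip_rate, locking_depth):
    """Interseismic surface velocity (mm/yr) across a buried screw dislocation."""
    return (slip_rate / np.pi) * np.arctan(x_km / locking_depth)

rng = np.random.default_rng(1)
x = np.linspace(-80, 80, 40)                       # station distances from the fault (km)
true = screw_dislocation(x, 3.0, 10.0)             # ~3 mm/yr gradient, 10 km locking depth
obs = true + 0.4 * rng.standard_normal(x.size)     # add observational noise

p_best, _ = curve_fit(screw_dislocation, x, obs, p0=[2.0, 5.0])
resid = obs - screw_dislocation(x, *p_best)

# Bootstrap: resample residuals, refit, collect the parameter distributions
boot = []
for _ in range(500):
    synth = screw_dislocation(x, *p_best) + rng.choice(resid, size=resid.size, replace=True)
    p_i, _ = curve_fit(screw_dislocation, x, synth, p0=p_best)
    boot.append(p_i)
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"slip rate: {p_best[0]:.2f} mm/yr (95% CI {lo[0]:.2f}-{hi[0]:.2f})")
print(f"locking depth: {p_best[1]:.1f} km (95% CI {lo[1]:.1f}-{hi[1]:.1f})")
```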
Abstract:
Chlorinated solvents have been the most ubiquitous organic contaminants found in groundwater over the last five decades. They generally reach groundwater as Dense Non-Aqueous Phase Liquids (DNAPLs). This phase can migrate through aquifers, and also through aquitards, in ways that aqueous contaminants cannot. The complex phase partitioning that chlorinated solvent DNAPLs can undergo (i.e. to the dissolved, vapor or sorbed phase), as well as their transformations (e.g. degradation), depend on the physico-chemical properties of the contaminants themselves and on the features of the hydrogeological system. The main goal of the thesis is to provide new knowledge for future investigations of sites contaminated by DNAPLs in alluvial settings, proposing innovative investigative approaches and emphasizing some of the key issues and main criticalities of this kind of contaminant in such settings. To achieve this goal, the hydrogeologic setting below the city of Ferrara (Po plain, northern Italy), which is affected by scattered contamination by chlorinated solvents, has been investigated at different scales (regional and site-specific), both from an intrinsic (i.e. groundwater flow systems) and a specific (i.e. chlorinated solvent DNAPL behavior) point of view. Detailed investigations were carried out in particular at one selected test site, known as the "Caretti site", where high-resolution vertical profiles of different kinds of data were collected by means of multilevel monitoring systems and other innovative sampling and analytical techniques. This allowed us to achieve a deep geological and hydrogeological knowledge of the system and to reconstruct in detail the architecture of the contaminants in relation to the features of the hosting porous medium. The results achieved in this thesis are useful not only at the local scale, e.g. to interpret the origin of contamination at other sites in the Ferrara area, but also at the global scale, to guide future remediation and protection actions in similar hydrogeologic settings.
Abstract:
The subject of this research is the transformations that occurred in daily life between the 3rd and 1st centuries BC in two Latin colonies, Ariminum and Bononia, as seen through the archaeological evidence. The consequences of a large-scale phenomenon, Roman-Latin colonization, are investigated at the local scale, focusing on dwelling forms, craft traditions and food practices. The main documentary basis is the archaeological evidence of domestic architecture and the pottery found in the settlement areas of Rimini and Bologna and in the surrounding territories. To fully grasp the transformations that took place, the main features of settlement, domestic architecture and pottery preceding the Roman-Latin colonization are reviewed. The two colonies, their dwellings and their pottery are also considered within a broader territorial context, looking at the middle Adriatic area and the Cispadane region. At the same time, there are continuous references to middle Tyrrhenian Italy, since they help explain many of the archaeological finds and historical processes under examination. The first chapter deals with Roman-Latin colonization as it played out in Rimini and Bologna. The question it seeks to answer is: who were the inhabitants of the two colonies? In this regard, the issue of pre-colonial settlements is also addressed. The second chapter analyses the urban dwellings. What were the main innovations in domestic architecture introduced by colonization? How did dwelling forms change in Ariminum and Bononia in the Republican age? The third chapter focuses on pottery for the preparation and consumption of food in settlement contexts. How did food practices and the craft traditions used in pottery production change in the two cities? The last chapter discusses some theoretical frameworks applied to the phenomena described in the previous chapters (Romanization, acculturation, identity, globalization). The final section addresses the transformations that occurred in the daily life of Ariminum and Bononia.
Abstract:
This PhD thesis explores the ecological responses of bird species to glacial-interglacial transitions during the late Quaternary in the Western Palearctic, using multiple approaches at different scales and highlighting the importance of the bird fossil record and of quantitative methods for elucidating biotic trends in relation to long-term climate changes. The taxonomic and taphonomic analyses of the avian fossil assemblages from four Italian Middle and Upper Pleistocene sedimentary successions (Grotta del Cavallo, Grotta di Fumane, Grotta di Castelcivita, and Grotta di Uluzzo C) allowed us to reconstruct local-scale patterns in the birds' response to climate changes. These bird assemblages are characterized by the presence of temperate species and by the occasional presence of cold-dwelling species during glacials, related to range shifts. These local patterns are supported by those identified at the continental scale. In this respect, I mapped the present-day and LGM climatic envelopes of species with different climatic requirements. The results show a substantial stability in the range of temperate species and pronounced changes in the range of cold-dwelling species, supported by their fossil records. Therefore, the responses to climate oscillations are strongly related to the thermal niches of the investigated species. I also clarified the dynamics of the presence of boreal and arctic bird species in Mediterranean Europe, due to southward range shifts, during the glacial phases. After a reassessment of the reliability of the existing fossil evidence, I show that this phenomenon is not as common as previously thought, with important implications for the paleoclimatic and paleoenvironmental significance of the targeted species. I have also been able to explore the potential of multivariate and rarefaction methods in the analysis of the avian fossils from Grotta del Cavallo. These approaches helped to delineate the main drivers of taphonomic damage and the dynamics of species diversity in relation to climate-driven paleoenvironmental changes.
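To make the rarefaction approach mentioned above concrete, the following sketch computes individual-based (Hurlbert) rarefaction, i.e. the expected number of taxa in a random subsample of n specimens, so that assemblages of different sizes can be compared on equal footing. The taxon counts and the two "units" are made-up illustrative numbers, not data from the thesis.

```python
import numpy as np
from scipy.special import gammaln

def rarefied_richness(counts, n):
    """Expected number of taxa in a random subsample of n specimens (Hurlbert rarefaction)."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()

    def log_comb(a, b):
        # log of the binomial coefficient C(a, b), stable for large counts
        return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

    prob_absent = np.zeros_like(counts)
    can_be_missed = (N - counts) >= n            # taxa a subsample of size n could miss entirely
    prob_absent[can_be_missed] = np.exp(
        log_comb(N - counts[can_be_missed], n) - log_comb(N, n))
    return float(np.sum(1.0 - prob_absent))

# Hypothetical specimen counts per bird taxon in two assemblages (illustrative only)
unit_a = [120, 45, 30, 12, 8, 5, 2, 1, 1]
unit_b = [60, 22, 9, 4, 2, 1]
n_common = min(sum(unit_a), sum(unit_b))         # rarefy both to the smaller sample size
for name, counts in [("unit A", unit_a), ("unit B", unit_b)]:
    print(f"{name}: {rarefied_richness(counts, n_common):.2f} expected taxa at n = {n_common}")
```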
Abstract:
With an increasing demand for rural resources and land, new challenges are emerging that affect and restructure the European countryside. While this creates opportunities for rural living, it has also opened a discussion on the risks of rural gentrification. The concept of rural gentrification covers the influx of new residents leading to an economic upgrade of an area, making it unaffordable for local inhabitants to stay. Rural gentrification occurs in areas perceived as attractive. Paradoxically, in-migrants re-shape their surrounding landscape. Rural gentrification may therefore displace not only people but also landscape values. Thus, this research aims to understand the twofold role of landscape in rural gentrification theory: as a possible driver attracting residents and as a product shaped by its residents. To understand the potential gentrifiers' decision process, this research has provided a collection of drivers behind in-migration. Moreover, essential indicators of rural gentrification have been collected from previous studies. Yet the available indicators do not contain measures to capture the related landscape changes. To fill this gap, after analysing established landscape assessment methodologies and evaluating their relevance for assessing gentrification, a new Landscape Assessment approach is proposed. This method introduces a novel way to capture landscape change caused by gentrification through historical depth. The measures for studying gentrification were applied on Gotland, Sweden. The study showed a stagnating population while the number of properties increased and housing prices rose. These factors do not indicate positive growth but rather risks of gentrification. The research then applied the proposed Landscape Assessment method to areas exposed to gentrification. Results suggest that landscape change takes place at a local scale and could over time endanger key characteristics. The methodology contributes to a discussion on grasping nuances within the rural context. It has also proven useful for indicating cumulative changes, which is necessary in managing landscape values.
Abstract:
This thesis analyzes the impact of heat extremes in urban and rural environments, considering processes related to severely high temperatures and unusual dryness. The first part deals with the influence of large-scale heatwave events on the local-scale urban heat island (UHI) effect. The temperatures recorded over a 20-year summer period by meteorological stations in 37 European cities are examined to evaluate the variations of the UHI during heatwaves with respect to non-heatwave days. A statistical analysis reveals a negligible impact of large-scale extreme temperatures on the local daytime urban climate, but a notable exacerbation of the UHI effect at night. A comparison with the UrbClim model outputs confirms the UHI strengthening during heatwave episodes, with an intensity independent of the climate zone. The investigation of the relationship between large-scale temperature anomalies and the UHI highlights a smooth and continuous dependence, though with strong variability. The lack of a threshold behavior in this relationship suggests that large-scale temperature variability can affect the local-scale UHI even in conditions different from extreme events. The second part examines the transition from meteorological to agricultural drought, which is the first stage of the drought propagation process. A multi-year reanalysis dataset covering numerous drought events over the Iberian Peninsula is considered. The behavior of different non-parametric standardized drought indices in drought detection is evaluated. A statistical approach based on run theory is employed to analyze the main characteristics of drought propagation. The propagation from meteorological to agricultural drought events is found to develop in about 1-2 months. The duration of agricultural drought appears shorter than that of meteorological drought, but its onset is delayed. The propagation probability increases with the severity of the originating meteorological drought. A new combined agricultural drought index is developed as a tool for balancing the characteristics of the other adopted indices.
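As an illustration of the non-parametric standardized indices and the run-theory event extraction referred to above, the sketch below maps empirical percentiles of a series to standard-normal quantiles and then identifies drought events as runs below a threshold. It deliberately skips the month-wise fitting and multi-month accumulation a real SPI/SSI computation would use; the function names and the synthetic series are assumptions.

```python
import numpy as np
from scipy.stats import norm

def standardized_index(series):
    """Non-parametric standardized index: empirical percentiles mapped to z-scores."""
    series = np.asarray(series, dtype=float)
    ranks = np.argsort(np.argsort(series)) + 1           # ranks 1..n
    p = ranks / (series.size + 1.0)                      # Weibull plotting position
    return norm.ppf(p)

def drought_runs(index, threshold=-1.0):
    """Run-theory event extraction: contiguous spells with index below threshold."""
    events, start = [], None
    for i, below in enumerate(index < threshold):
        if below and start is None:
            start = i
        elif not below and start is not None:
            events.append({"onset": start, "duration": i - start,
                           "severity": float(-index[start:i].sum())})
            start = None
    if start is not None:
        events.append({"onset": start, "duration": len(index) - start,
                       "severity": float(-index[start:].sum())})
    return events

# Toy monthly precipitation series (20 years of synthetic monthly totals)
rng = np.random.default_rng(2)
precip = rng.gamma(shape=2.0, scale=30.0, size=240)
spi_like = standardized_index(precip)
for event in drought_runs(spi_like):
    print(event)
```

Run theory gives each event an onset, a duration and a severity, which is exactly the kind of characterization needed to compare the timing of meteorological and agricultural droughts.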
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed in the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which makes it possible to easily model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and encloses concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on problems of practical size, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
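The local branching idea this abstract builds on restricts search to a Hamming-distance neighbourhood of an incumbent 0-1 solution. A minimal MIP-flavoured sketch of that neighbourhood constraint is shown below using PuLP; it is a generic illustration of one local-branching step, not the thesis's CP implementation, and the small knapsack model and incumbent around it are made-up examples.

```python
import pulp

# Toy 0-1 knapsack incumbent around which we define a local-branching neighbourhood
values = [10, 7, 5, 9, 3]
weights = [4, 3, 2, 5, 1]
capacity = 8
incumbent = [1, 0, 1, 0, 0]          # some known feasible solution
k = 2                                # neighbourhood radius (maximum Hamming distance)

prob = pulp.LpProblem("local_branching_step", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(values))]
prob += pulp.lpSum(v * xi for v, xi in zip(values, x))                 # objective
prob += pulp.lpSum(w * xi for w, xi in zip(weights, x)) <= capacity    # knapsack constraint

# Local branching constraint: Hamming distance from the incumbent at most k
prob += (pulp.lpSum(1 - x[i] for i, v in enumerate(incumbent) if v == 1)
         + pulp.lpSum(x[i] for i, v in enumerate(incumbent) if v == 0)) <= k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("objective:", pulp.value(prob.objective))
print("solution: ", [int(xi.value()) for xi in x])
```

In a full local branching scheme this constraint (and its complement, distance greater than k) drives the branching itself; the thesis ports this mechanism, and the related Sliced Neighborhood Search, into CP-based tree search.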
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale and high-detail applications. The two models were first applied to several numerical test cases, to assess the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed, due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
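To give a concrete sense of the diffusive (zero-inertia) shallow water approach mentioned above, here is a deliberately crude one-dimensional finite-volume sketch. It is not the CA2D or IFD-GGA scheme: the upwind depth choice, Manning coefficient, grid, time step and test case are illustrative assumptions, and no stability control is included beyond choosing a small time step.

```python
import numpy as np

def diffusive_wave_step(h, z, dx, dt, n_manning=0.05):
    """One explicit update of the 1D diffusive (zero-inertia) wave, unit-width channel.

    h : water depth per cell (m), z : bed elevation per cell (m).
    """
    eta = z + h                                          # water-surface elevation
    slope = (eta[1:] - eta[:-1]) / dx                    # surface slope at cell interfaces
    h_face = np.where(slope < 0, h[:-1], h[1:])          # upwind (donor-cell) depth
    # Manning-type unit discharge, directed down the water-surface gradient
    q = -np.sign(slope) * (h_face ** (5.0 / 3.0) / n_manning) * np.sqrt(np.abs(slope))
    q_full = np.concatenate(([0.0], q, [0.0]))           # closed boundaries
    return h - dt / dx * (q_full[1:] - q_full[:-1])      # conservative volume update

# Toy set-up: a mound of water released on a gently sloping bed
dx, dt, ncell = 50.0, 2.0, 100
z = 0.001 * dx * np.arange(ncell)[::-1]                  # bed falling in the +x direction
h = np.full(ncell, 0.05)
h[10:20] += 0.5                                          # initial flood volume
for _ in range(1000):
    h = diffusive_wave_step(h, z, dx, dt)
print(f"peak depth {h.max():.3f} m at cell {int(h.argmax())}")
```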
Abstract:
This thesis is devoted to the study of the properties of high-redshift galaxies in the epoch 1 < z < 3, when a substantial fraction of galaxy mass was assembled and the evolution of the star-formation rate density peaked. Following a multi-perspective approach and using the most recent and high-quality data available (spectra, photometry and imaging), the morphologies and the star-formation properties of high-redshift galaxies were investigated. Through an accurate morphological analysis, the build-up of the Hubble sequence was placed around z ~ 2.5. High-redshift galaxies appear, in general, much more irregular and asymmetric than local ones. Moreover, the occurrence of morphological k-correction is less pronounced than in the local Universe. Different star-formation rate indicators were also studied. The comparison of ultraviolet- and optical-based estimates with the values derived from the infrared luminosity showed that the traditional way of addressing dust obscuration is problematic at high redshift, and new models of dust geometry and composition are required. Finally, by means of stacking techniques applied to rest-frame ultraviolet spectra of star-forming galaxies at z ~ 2, the warm phase of galactic-scale outflows was studied. Evidence was found of escaping gas at velocities of ~ 100 km/s. By studying the correlation of the equivalent widths of interstellar absorption lines with galaxy physical properties, the intensity of the outflow-related spectral features was shown to depend strongly on a combination of the velocity dispersion of the gas and its geometry.
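A minimal sketch of the spectral stacking step mentioned above: each observed spectrum is shifted to its rest frame, crudely normalised, interpolated onto a common wavelength grid and median-combined, so that weak interstellar absorption features emerge from the noise. The Si II 1526 Å feature, the normalisation choice and the synthetic spectra are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np

def stack_rest_frame(spectra, redshifts, wave_grid):
    """Median-stack observed spectra after shifting each to its rest frame.

    spectra : list of (wavelength_obs, flux) pairs; wave_grid : common rest-frame grid (Å).
    """
    shifted = []
    for (wave_obs, flux), z in zip(spectra, redshifts):
        wave_rest = wave_obs / (1.0 + z)                     # de-redshift the wavelength axis
        flux_norm = flux / np.nanmedian(flux)                # crude continuum normalisation
        shifted.append(np.interp(wave_grid, wave_rest, flux_norm,
                                 left=np.nan, right=np.nan))
    return np.nanmedian(np.vstack(shifted), axis=0)

# Toy example: three fake UV spectra of galaxies at z ~ 2 with an absorption dip at 1526 Å
rng = np.random.default_rng(3)
grid = np.arange(1400.0, 1650.0, 0.5)
spectra, redshifts = [], [1.9, 2.1, 2.3]
for z in redshifts:
    flux = 1.0 + 0.05 * rng.standard_normal(grid.size)
    flux -= 0.3 * np.exp(-0.5 * ((grid - 1526.0) / 2.0) ** 2)   # absorption feature
    spectra.append((grid * (1.0 + z), flux))

stacked = stack_rest_frame(spectra, redshifts, grid)
print(f"stacked flux at 1526 Å: {stacked[np.argmin(np.abs(grid - 1526.0))]:.2f}")
```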
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components in time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built, and its statistical properties are derived. Third, the convergence properties of trigonometric estimators are discussed, together with the problem of choosing the order of the model. A large-scale simulation experiment has been designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
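A minimal numerical sketch of what a local trigonometric regression does: at each target time, an intercept plus a small set of sine/cosine terms is fitted by kernel-weighted least squares. The Epanechnikov kernel, the bandwidth and the toy monthly series are assumptions made for illustration; the filter developed in the thesis is defined and analysed more carefully than this.

```python
import numpy as np

def trig_design(t, period, n_harmonics):
    """Design matrix with an intercept and n_harmonics sine/cosine pairs."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        omega = 2.0 * np.pi * k / period
        cols += [np.cos(omega * t), np.sin(omega * t)]
    return np.column_stack(cols)

def local_trig_fit(t, y, t0, period, bandwidth, n_harmonics=2):
    """Locally weighted trigonometric regression evaluated at t0 (Epanechnikov kernel)."""
    u = (t - t0) / bandwidth
    w = np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u ** 2), 0.0)     # kernel weights
    X = trig_design(t, period, n_harmonics)
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sw * X, np.sqrt(w) * y, rcond=None)
    return float(trig_design(t0, period, n_harmonics) @ beta)

# Toy monthly series: linear trend plus an annual cycle plus noise
rng = np.random.default_rng(4)
t = np.arange(120.0)                                   # 10 years of monthly observations
y = 0.02 * t + np.sin(2 * np.pi * t / 12.0) + 0.3 * rng.standard_normal(t.size)
smooth = np.array([local_trig_fit(t, y, t0, period=12.0, bandwidth=24.0) for t0 in t])
print(f"fitted value at t = 60: {smooth[60]:.2f}")
```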
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial breakthrough with respect to the state of the art, thus representing a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework has then been used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach used for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
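To make the "constraint-coupled" structure and the primal decomposition idea concrete, here is a centralised toy sketch: agents hold local variables tied together only by a shared budget, a master allocates portions of the budget, and the allocation is updated using the multipliers returned by the local subproblems. Everything here (the quadratic costs, the budget, the step sizes) is a made-up illustration, and unlike the thesis's algorithms it is not distributed over a graph.

```python
import numpy as np

# Toy constraint-coupled problem:
#   minimize  sum_i (x_i - c_i)^2   subject to  sum_i x_i <= b.
# Primal decomposition: agent i receives an allocation y_i of the shared budget
# and solves its own subproblem  min (x_i - c_i)^2  s.t.  x_i <= y_i.
c = np.array([3.0, 1.0, 2.0])
b = 4.0
n = c.size

y = np.full(n, b / n)                       # initial allocations summing to b
for k in range(2000):
    x = np.minimum(c, y)                    # closed-form agent solutions
    mu = np.maximum(0.0, 2.0 * (c - y))     # multipliers of the local constraints x_i <= y_i
    step = 0.5 / np.sqrt(k + 1.0)
    y = y + step * mu                       # subgradient step on the master allocation
    y -= (y.sum() - b) / n                  # project back onto sum(y) = b

print("allocations:    ", np.round(y, 3))
print("agent solutions:", np.round(np.minimum(c, y), 3))
print("coupling holds: ", np.minimum(c, y).sum() <= b + 1e-6)
```

Even in this toy version one attractive property is visible: since each agent always respects its own allocation, the coupling constraint is satisfied at every iteration, which is what makes primal decomposition well suited to producing feasible solutions with suboptimality bounds.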
Abstract:
The correlations between the evolution of Super Massive Black Holes (SMBHs) and their host galaxies suggest that SMBH accretion on sub-pc scales (active galactic nuclei, AGN) is linked to the building of the galaxy over kpc scales, through so-called AGN feedback. Most of the galaxy assembly occurs in overdense large-scale structures (LSSs). AGN residing in powerful sources in LSSs, such as proto-brightest cluster galaxies (BCGs), can affect the evolution of the surrounding intra-cluster medium (ICM) and nearby galaxies. Among distant AGN, high-redshift radio galaxies (HzRGs) are found to be excellent BCG progenitor candidates. In this Thesis we analyze novel interferometric observations of the so-called "J1030" field, centered on the z = 6.3 SDSS quasar J1030+0524, carried out with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Jansky Very Large Array (JVLA). This field hosts an LSS assembling around a powerful HzRG at z = 1.7 that shows evidence of positive AGN feedback in heating the surrounding ICM and promoting star formation in multiple galaxies at distances of hundreds of kpc. We report the detection of gas-rich members of the LSS, including the HzRG. We show that the LSS is going to evolve into a local massive cluster and that the HzRG is the proto-BCG. We unveil signatures of the proto-BCG's interaction with the surrounding ICM, strengthening the positive AGN feedback scenario. From the JVLA observations of the "J1030" field we extracted one of the deepest extragalactic radio surveys to date (~12.5 µJy at 5 sigma). Exploiting the synergy with the deep X-ray survey (~500 ks), we investigated the relation between the X-ray and radio emission of an X-ray-selected sample, finding that the radio emission is powered by different processes (star formation and AGN) and that the AGN-driven sample is mostly composed of radio-quiet objects that display a significant X-ray/radio correlation.
Abstract:
The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution and cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance seamless modeling between scales. On the other hand, the data assimilation methodologies needed to improve unstructured-grid models in the coastal seas have been developed only recently and need significant advancements. Here, we link unstructured-grid ocean modeling to variational data assimilation methods. In particular, we show results from the modeling system SANIFS, based on the SHYFEM fully-baroclinic unstructured-grid model interfaced with OceanVar, a state-of-the-art variational data assimilation scheme adopted for several systems based on a structured grid. OceanVar implements a 3DVar DA scheme. The background error covariance matrix is modeled as the combination of three linear operators. The vertical part is represented using multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed to be Gaussian and isotropic and is modeled using a first-order recursive filter algorithm designed for structured and regular grids. Here we introduce a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments using the SANIFS implementation interfaced with OceanVar over the period 2017-2018, one assimilating only temperature and salinity from Argo profiles and a second also including sea level anomaly. The results showed a successful implementation of the approach and the added value of the assimilation for the active tracer fields. When looking at the broad basin, no significant improvements are found for the sea level, which requires future investigation. Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
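On a structured, regular grid, the first-order recursive filter used to model horizontal background-error correlations reduces to the forward/backward sweeps sketched below; the thesis's contribution is precisely its extension to unstructured grids, which this sketch does not attempt. The alpha value, the number of passes and the omission of the amplitude normalisation applied in operational schemes are illustrative simplifications.

```python
import numpy as np

def recursive_filter_1d(field, alpha, n_passes=4):
    """First-order recursive filter: forward/backward sweeps approximating a Gaussian.

    alpha in (0, 1) controls the correlation length; repeated passes make the
    implied smoothing kernel increasingly Gaussian-shaped.
    """
    out = field.astype(float).copy()
    for _ in range(n_passes):
        for i in range(1, out.size):                       # forward sweep
            out[i] = alpha * out[i - 1] + (1.0 - alpha) * out[i]
        for i in range(out.size - 2, -1, -1):              # backward sweep
            out[i] = alpha * out[i + 1] + (1.0 - alpha) * out[i]
    return out

# Spread a single observation increment placed in the middle of a 1D grid
increment = np.zeros(101)
increment[50] = 1.0
smoothed = recursive_filter_1d(increment, alpha=0.7)
print(f"peak {smoothed.max():.3f}, half-width ~ {np.sum(smoothed > 0.5 * smoothed.max())} grid points")
```

The difficulty the thesis addresses is that these sweeps presuppose an ordered, regular set of neighbours, which an unstructured triangular mesh does not provide.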