873 results for Multi-scale modeling
Abstract:
In the present work, a multiphysics simulation of an innovative safety system for light water nuclear reactors is performed, with the aim of increasing the reliability of the main decay heat removal system. The system studied, denoted by the acronym PERSEO (In-Pool Energy Removal System for Emergency Operation), is able to remove the decay power from the primary side of a light water nuclear reactor through a heat suppression pool. The experimental facility, located at the SIET laboratories (Piacenza), is an evolution of the Thermal Valve concept, in which the triggering valve is installed on the liquid side, on a line connecting the two pools at the bottom. During normal operation the valve is closed, while in emergency conditions it opens and the heat exchanger is flooded, with consequent heat transfer from the primary side to the pool side. In order to verify the correct system behavior during long-term accidental transients, two main experimental PERSEO tests are analyzed. For this purpose, a coupling between the one-dimensional system code CATHARE, which reproduces the system-scale behavior, and the three-dimensional CFD code NEPTUNE CFD, which allows a full investigation of the pools and the injector, is implemented. The coupling between the two codes is realized through the boundary conditions. In a first analysis, the facility is simulated with the system code CATHARE V2.5 to validate the results against the experimental data. The comparison of the numerical results shows a different void distribution under boiling conditions inside the heat suppression pool for the two cases of a single-volume and a three-volume nodalization scheme of the pool. Finally, to improve the investigation of the void distribution inside the pool and of the temperature stratification phenomena below the injector, two- and three-dimensional CFD models with a simplified geometry of the system are adopted.
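The abstract does not detail how the boundary-condition exchange between the two codes is organized; the sketch below illustrates one common pattern under assumptions: an explicit coupling loop in which the 1D system solver and the 3D CFD solver advance alternately and exchange boundary data once per coupling step. All class and method names are hypothetical placeholders, not the actual CATHARE or NEPTUNE CFD interfaces.

```python
# Minimal sketch of an explicit code-coupling loop in which a 1D system code and a
# 3D CFD code exchange boundary conditions once per coupling time step.
# The classes and method names are hypothetical placeholders, not the actual
# CATHARE or NEPTUNE CFD interfaces.

class SystemCode1D:
    """Stand-in for the 1D system-scale solver (primary circuit + heat exchanger)."""
    def advance(self, dt, pool_pressure, pool_temperature):
        # ... advance the 1D solution; return the mass/energy flux sent to the pool
        return {"mass_flow": 12.0, "enthalpy": 2.7e6}      # dummy values

class CfdCode3D:
    """Stand-in for the 3D two-phase CFD solver of the pools and injector."""
    def advance(self, dt, inlet_flux):
        # ... advance the 3D solution with the injected flux as a boundary condition
        return {"pressure": 1.0e5, "temperature": 373.0}   # dummy values

def run_coupled(t_end=10.0, dt=0.05):
    system, cfd = SystemCode1D(), CfdCode3D()
    pool_state = {"pressure": 1.0e5, "temperature": 300.0}
    t = 0.0
    while t < t_end:
        # 1D -> 3D: injector mass/energy flux computed by the system code
        flux = system.advance(dt, pool_state["pressure"], pool_state["temperature"])
        # 3D -> 1D: pool-side conditions computed by the CFD code
        pool_state = cfd.advance(dt, flux)
        t += dt
    return pool_state

if __name__ == "__main__":
    print(run_coupled())
```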
Abstract:
Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem becomes learning as much as possible about its coding regions. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution comes from cross-genome comparisons. This thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences and on a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer three-dimensional structure (when a template is available), by way of cluster-specific HMM profiles that can be used to compute reliable template-to-target alignments even for distantly related proteins (sequence identity < 30%). Other BAR+-based applications were developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment BAR+ ranked among the ten most accurate methods. BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
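As an illustration of the kind of statistically validated transfer-by-homology described above, the sketch below assigns a GO term to the unannotated members of a cluster only when the term is over-represented according to a Bonferroni-corrected hypergeometric test. This is a generic sketch under assumptions (thresholds, data structures), not the actual BAR+ validation procedure.

```python
# Illustrative sketch (not the actual BAR+ procedure) of transfer-by-homology inside
# a sequence cluster: a GO term is transferred only if it is statistically
# over-represented in the cluster (hypergeometric test, Bonferroni correction).
# All thresholds and data structures are assumptions.
from scipy.stats import hypergeom

def validated_terms(cluster_terms, background_counts, n_background, alpha=0.01):
    """cluster_terms: list of GO-term lists, one per cluster member."""
    n_cluster = len(cluster_terms)
    term_counts = {}
    for terms in cluster_terms:
        for t in set(terms):
            term_counts[t] = term_counts.get(t, 0) + 1

    validated = []
    n_tests = max(len(term_counts), 1)
    for term, k in term_counts.items():
        n_term_bg = background_counts.get(term, k)       # occurrences in the database
        # P(at least k cluster members carry the term by chance)
        p = hypergeom.sf(k - 1, n_background, n_term_bg, n_cluster)
        if p * n_tests < alpha:                           # Bonferroni-corrected
            validated.append((term, p))
    return validated

# toy usage: a 4-member cluster, 3 of which carry GO:0016787
cluster = [["GO:0016787"], ["GO:0016787"], ["GO:0016787", "GO:0005524"], []]
print(validated_terms(cluster, {"GO:0016787": 50, "GO:0005524": 5000}, n_background=100000))
```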
Abstract:
Microalgae cultures are attracting great attention in many industrial applications. However, one of the technical challenges is to cut down the capital and operational costs of microalgae production systems, with reactor design and scale-up being especially difficult. The thesis opens with an overview of microalgae cultures as a possible answer to some of the planet's upcoming issues and of their applications in several fields. It then offers a general outline of the state of the art of microalgae culture systems, with a special focus on enclosed photobioreactors (PBRs). The overall objective of this study is to advance the knowledge of PBR design and lead to innovative large-scale microalgae cultivation processes. An airlift flat-panel photobioreactor was designed, modeled and experimentally characterized. The gas holdup, liquid flow velocity and oxygen mass transfer of the reactor were experimentally determined and mathematically modeled, and the performance of the reactor was tested by cultivation of microalgae. The model predictions correlated well with the experimental data, and high-concentration suspended cell cultures could be achieved under controlled conditions. The reactor was inoculated first with the algal strain Scenedesmus obliquus and later with Chlorella sp., and sparged with air. The reactor was operated in batch mode and monitored daily for pH, temperature, and biomass concentration and activity. The productivity of the novel device was determined, suggesting that the proposed design can be effectively and economically used in carbon dioxide mitigation technologies and in the production of algal biomass for biofuel and other bioproducts. These results support the possibility of scaling the reactor up to industrial scale on the basis of the models employed, and the potential advantages and disadvantages of this novel industrial design are discussed.
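For reference, the relations below are the standard textbook expressions commonly used when modeling gas holdup and oxygen mass transfer in airlift photobioreactors; the abstract does not state which specific correlations the thesis adopts, so this is only a generic sketch.

```latex
% Standard relations commonly used when modeling gas-liquid mass transfer in an
% airlift photobioreactor (generic sketch; the specific correlations adopted in the
% thesis are not stated in the abstract).
\[
  \varepsilon_G = \frac{V_G}{V_G + V_L}
  \qquad\text{(overall gas holdup)}
\]
\[
  \frac{dC_L}{dt} = k_L a \left( C^{*} - C_L \right)
  \qquad\text{(oxygen transfer, two-film model)}
\]
% $k_L a$: volumetric mass-transfer coefficient; $C^{*}$: dissolved-oxygen saturation
% concentration; $C_L$: instantaneous dissolved-oxygen concentration.
```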
Abstract:
Comparing latent constructs (loaded by reflective and congeneric measures) across cultures means studying how these unobserved variables vary, and/or covary with each other, after controlling for potentially disturbing cultural forces. This leads to the so-called ‘measurement invariance’ issue, which refers to the extent to which data collected with the same multi-item measurement instrument (i.e., a self-report questionnaire whose items underlie common latent constructs) are comparable across different cultural environments. Indeed, it would be unthinkable to explore latent-variable heterogeneity across populations (e.g., latent means, latent variances, latent covariances, or the magnitude of structural path coefficients describing causal relations among latent variables) without controlling for cultural bias in the underlying measures. Furthermore, it would be unrealistic to apply such a correction without a framework able to take all these potential cultural biases into account simultaneously, since the real world ‘acts’ simultaneously as well. As a consequence, I, as a researcher, may want to control for cultural forces by hypothesizing that they all act at the same time across the comparison groups, and then examine whether they inflate or suppress my new estimates by imposing hierarchically nested constraints on the originally estimated parameters. Multi-Sample Structural Equation Modeling-based Confirmatory Factor Analysis (MS-SEM-based CFA) still represents a dominant and flexible statistical framework for working out this potential cultural bias simultaneously. With this dissertation I attempt to introduce new viewpoints on measurement invariance handled within the covariance-based SEM framework, by means of a consumer behavior modeling application on functional food choices.
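The multi-group measurement model and the hierarchy of nested invariance constraints referred to above can be summarized in standard textbook form (not the specific model of the dissertation) as follows.

```latex
% Generic multi-group CFA measurement model and the usual hierarchy of nested
% invariance constraints (textbook sketch, not the specific model of the dissertation).
\[
  \mathbf{x}^{(g)} = \boldsymbol{\tau}^{(g)} + \boldsymbol{\Lambda}^{(g)}\,\boldsymbol{\xi}^{(g)} + \boldsymbol{\delta}^{(g)},
  \qquad g = 1,\dots,G
\]
% Configural invariance: same factor pattern in all groups.
% Metric (weak) invariance:   $\boldsymbol{\Lambda}^{(1)} = \dots = \boldsymbol{\Lambda}^{(G)}$
% Scalar (strong) invariance: additionally $\boldsymbol{\tau}^{(1)} = \dots = \boldsymbol{\tau}^{(G)}$
% Only once (at least partial) scalar invariance holds can latent means, variances and
% covariances be meaningfully compared across groups.
```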
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations. The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S=0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study. The mean CCN number concentrations (N_CCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16 000 cm−3 at S=1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction. At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of N_CCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that the aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
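The "simple κ-Köhler model" mentioned above is, in its widely used single-parameter form (Petters and Kreidenweis, 2007), the relation below; the comment lines label the symbols involved.

```latex
% Single-parameter kappa-Koehler relation (Petters and Kreidenweis, 2007) on which
% hygroscopicity parameterizations of this kind are based. Symbols: dry particle
% diameter $D_d$, droplet diameter $D$, surface tension $\sigma_{s/a}$, molar mass
% and density of water $M_w$, $\rho_w$.
\[
  S(D) \;=\; \frac{D^{3} - D_d^{3}}{D^{3} - D_d^{3}\,(1-\kappa)}
  \;\exp\!\left( \frac{4\,\sigma_{s/a} M_w}{R\,T\,\rho_w\,D} \right)
\]
% The critical supersaturation of a dry particle of diameter $D_d$ is the maximum of
% $S(D)-1$ over $D$; a larger $\kappa$ (more hygroscopic composition) lowers it.
```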
Abstract:
The formation of a market price for an asset can be understood as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. In statistical physics this is comparable to the emergence of macroscopic properties caused by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process which, besides its scaling behavior, include non-trivial correlation functions and volatility clustering in time. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations in a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of market participants manifest themselves on short time scales: the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the investigation of complex correlations in financial market time series, the question is addressed of which properties change at the switch from a positive trend to a negative trend. An empirical quantification by rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time these properties are also independent of the analyzed market: trends that last only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial bubbles and their collapses, since trends on small time scales occur much more frequently. In addition, a Monte Carlo-based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction in computing time is achieved by parallelization on a graphics card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work have been published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
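The following sketch illustrates the general idea of a pattern-conditioned statistic on a price time series: returns are grouped by the sign pattern of the preceding increments, and the mean subsequent increment is compared across patterns. It is a generic illustration under assumptions, not the specific measure developed in the thesis.

```python
# Generic illustration (not the specific measure developed in the thesis) of a
# pattern-based statistic: returns are conditioned on the sign pattern of the
# previous k price increments, and the mean next increment is compared across
# patterns. A non-random reaction shows up as pattern-dependent means.
import numpy as np

def pattern_conditioned_means(prices, k=3):
    increments = np.diff(np.asarray(prices, dtype=float))
    signs = np.sign(increments)
    stats = {}
    for t in range(k, len(increments)):
        pattern = tuple(signs[t - k:t])        # sign pattern of the last k increments
        stats.setdefault(pattern, []).append(increments[t])
    return {p: (np.mean(v), len(v)) for p, v in stats.items() if len(v) > 10}

# toy usage with a random walk: conditional means should all be close to zero
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=5000))
for pattern, (mean_next, n) in sorted(pattern_conditioned_means(prices).items()):
    print(pattern, round(mean_next, 4), n)
```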
Abstract:
Workaholism is defined as the combination of two underlying dimensions: working excessively and working compulsively. The present thesis aims at achieving the following purposes: 1) to test whether the interaction between environmental and personal antecedents may enhance workaholism; 2) to develop a questionnaire aimed at assessing overwork climate in the workplace; 3) to contrast focal employees’ and coworkers’ perceptions of employees’ workaholism and engagement. Concerning the first purpose, the interaction between overwork climate and person characteristics (achievement motivation, perfectionism, conscientiousness, self-efficacy) was explored in a sample of 333 Dutch employees. The results of moderated regression analyses showed that the interaction between overwork climate and person characteristics is related to workaholism. The second purpose was pursued with two interrelated studies. In Study 1 the Overwork Climate Scale (OWCS) was developed and tested using a principal component analysis (N = 395) and a confirmatory factor analysis (N = 396). Two overwork climate dimensions were distinguished: overwork endorsement and lacking overwork rewards. In Study 2 the total sample (N = 791) was used to explore the association of overwork climate with two types of working hard: work engagement and workaholism. Lacking overwork rewards was negatively associated with engagement, whereas overwork endorsement showed a positive association with workaholism. Concerning the third purpose, using a sample of 73 dyads composed of focal employees and their coworkers, a multitrait-multimethod matrix and a correlated trait-correlated method model, i.e. the CT-C(M–1) model, were examined. Our results showed considerable agreement between raters on focal employees' engagement and workaholism. In contrast, we observed a significant difference concerning the cognitive dimension of workaholism, working compulsively. Moreover, we provided further evidence for the discriminant validity between engagement and workaholism. Overall, workaholism appears as a negative work-related state that could be better explained by adopting a multi-causal and multi-rater approach.
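A moderated regression with an interaction term, of the type used for the first purpose, can be sketched as follows; the column names and simulated data are purely illustrative and are not the Dutch sample analyzed in the thesis.

```python
# Minimal sketch of a moderated regression with an interaction term, in the spirit of
# the analyses described above. Column names (workaholism, overwork_climate,
# perfectionism) and the simulated data are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 333
climate = rng.normal(size=n)
perfectionism = rng.normal(size=n)
workaholism = 0.3 * climate + 0.2 * perfectionism + 0.25 * climate * perfectionism \
              + rng.normal(scale=1.0, size=n)
df = pd.DataFrame({"workaholism": workaholism,
                   "overwork_climate": climate,      # predictors assumed mean-centered
                   "perfectionism": perfectionism})

# 'a * b' expands to the main effects plus the a:b interaction term
model = smf.ols("workaholism ~ overwork_climate * perfectionism", data=df).fit()
print(model.params)  # the overwork_climate:perfectionism coefficient is the moderation effect
```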
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential of improving people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, which offers a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
This is a research B project for the University of Bologna. The course is the civil engineering LAUREA MAGISTRALE at UNIBO. The main purpose of this research is to promote another way of explaining, analyzing and presenting some civil engineering aspects to students worldwide, through theory, modeling and photos. The basic idea is divided into three steps. The first is to present and analyze the theoretical part: a detailed analysis of the theory, combined with theorems, explanations, examples and exercises, covers this step. In the second, a model makes clear the parts discussed in the theory by showing how the structures work or fail. The modeling is able to present, at scale, the behavior of many elements that we use in real structures. After these two steps, an exhibition of photos from the real world, with comments, gives engineers the chance to observe all this theoretical and modeling-laboratory material in many different cases. For example, many civil engineers in the world may know about wind pressure on structures, but many of them have never seen the extraordinary behavior of the Tacoma bridge ‘dancing with the air’. At this point I would like to say that what I have done is not a book, but research into how this ‘3 step’ presentation or explanation of some mechanical characteristics could be helpful. I know that my research is something different and new, and in my opinion it is important because it helps students go deeper into the science and also gives new ideas and inspiration. This way of teaching can be used in all lessons, especially technical ones. I hope that one day all books will adopt this kind of presentation.
Abstract:
Chlorinated solvents have been the most ubiquitous organic contaminants found in groundwater over the last five decades. They generally reach groundwater as Dense Non-Aqueous Phase Liquid (DNAPL). This phase can migrate through aquifers, and also through aquitards, in ways that aqueous contaminants cannot. The complex phase partitioning that chlorinated solvent DNAPLs can undergo (i.e. into the dissolved, vapor or sorbed phase), as well as their transformations (e.g. degradation), depend on the physico-chemical properties of the contaminants themselves and on features of the hydrogeological system. The main goal of the thesis is to provide new knowledge for future investigations of sites contaminated by DNAPLs in alluvial settings, proposing innovative investigative approaches and emphasizing some of the key issues and main criticalities of this kind of contaminant in such settings. To achieve this goal, the hydrogeologic setting below the city of Ferrara (Po plain, northern Italy), which is affected by scattered contamination by chlorinated solvents, has been investigated at different scales (regional and site-specific), both from an intrinsic (i.e. groundwater flow systems) and a specific (i.e. chlorinated solvent DNAPL behavior) point of view. Detailed investigations were carried out in particular at one selected test site, known as the “Caretti site”, where high-resolution vertical profiles of different kinds of data were collected by means of multilevel monitoring systems and other innovative sampling and analytical techniques. This made it possible to achieve a deep geological and hydrogeological knowledge of the system and to reconstruct in detail the architecture of the contaminants in relation to the features of the hosting porous medium. The results of this thesis are useful not only at the local scale, e.g. to interpret the origin of contamination at other sites in the Ferrara area, but also more generally, to guide future remediation and protection actions in similar hydrogeologic settings.
Abstract:
During the last few decades an unprecedented technological growth has been at the center of embedded systems design, with Moore’s Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they face the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.
Abstract:
In this work, simulations of liquids at the molecular level were performed using different multi-scale techniques. These allow an effective description of the liquid that requires less computing time and can therefore describe phenomena on longer time and length scales. An essential aspect is a simplified (“coarse-grained”) model, which is obtained from simulations of the detailed model by a systematic procedure; in this procedure, selected properties of the detailed model (e.g. the pair correlation function, the pressure, etc.) are reproduced. Algorithms were investigated that allow a simultaneous coupling of the detailed and the simplified model (“Adaptive Resolution Scheme”, AdResS). Here, the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described with the simplified model. For this purpose a method (“thermodynamic force”) was developed to make the coupling possible even when the models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which formulates the coupling by means of a Hamiltonian; in this algorithm a correction analogous to the thermodynamic force is possible at lower computational cost. As an application of these fundamental techniques, path-integral molecular dynamics (MD) simulations of water were studied. With this method it is possible to include quantum mechanical effects of the nuclei (delocalization, zero-point energy) in the simulation. Here, a multi-scale technique (“force matching”) was first used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within a single simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (one point particle). In this way a water-vacuum interface could be simulated in which only the surface is described with the path-integral model and the rest with the simplified model.
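For reference, the AdResS force interpolation and the H-AdResS Hamiltonian mentioned above are commonly written in the literature in the form below; this is a sketch, and the exact notation of the thesis may differ.

```latex
% Standard form of the AdResS force interpolation and of the H-AdResS Hamiltonian as
% commonly written in the literature (sketch; the thesis notation may differ).
% w(x) and \lambda(x) are smooth switching functions equal to 1 in the atomistic
% region and 0 in the coarse-grained region.
\[
  \mathbf{F}_{\alpha\beta} \;=\; w(x_\alpha)\,w(x_\beta)\,\mathbf{F}^{\mathrm{AA}}_{\alpha\beta}
  \;+\; \bigl[\,1 - w(x_\alpha)\,w(x_\beta)\,\bigr]\,\mathbf{F}^{\mathrm{CG}}_{\alpha\beta}
  \qquad\text{(AdResS)}
\]
\[
  H \;=\; K \;+\; \sum_{\alpha} \Bigl[\, \lambda(X_\alpha)\, V^{\mathrm{AA}}_{\alpha}
  \;+\; \bigl(1-\lambda(X_\alpha)\bigr)\, V^{\mathrm{CG}}_{\alpha} \,\Bigr]
  \qquad\text{(H-AdResS)}
\]
% The "thermodynamic force" is an additional one-body correction applied in the
% transition region to keep the density flat when the two models are coupled at
% different thermodynamic state points.
```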
Abstract:
In this thesis different approaches for the modeling and simulation of the blood protein fibrinogen are presented. The approaches are meant to systematically connect the multiple time and length scales involved in the dynamics of fibrinogen in solution and at inorganic surfaces. The first part of the thesis covers simulations of fibrinogen at the all-atom level. Simulations of the fibrinogen protomer and dimer are performed in explicit solvent to characterize the dynamics of fibrinogen in solution. These simulations reveal an unexpectedly large and fast bending motion that is facilitated by molecular hinges located in the coiled-coil region of fibrinogen. This behavior is characterized by a bending angle and a dihedral angle, and the distribution of these angles is measured. As a consequence of the atomistic detail of the simulations, it is possible to illuminate small-scale behavior in the binding pockets of fibrinogen that hints at a previously unknown allosteric effect. In a second step, atomistic simulations of the fibrinogen protomer are performed at graphite and mica surfaces to investigate initial adsorption stages. These simulations highlight the different adsorption mechanisms at the hydrophobic graphite surface and the charged, hydrophilic mica surface. It is found that initial adsorption happens in a preferred orientation on mica. Many effects of practical interest involve aggregates of many fibrinogen molecules. To investigate such systems, time and length scales need to be simulated that are not attainable in atomistic simulations. It is therefore necessary to develop lower-resolution models of fibrinogen. This is done in the second part of the thesis. First, a systematically coarse-grained model is derived and parametrized based on the atomistic simulations of the first part. In this model the fibrinogen molecule is represented by 45 beads instead of nearly 31,000 atoms. The intra-molecular interactions of the beads are modeled as a heterogeneous elastic network, while inter-molecular interactions are assumed to be a combination of electrostatic and van der Waals interactions. A method is presented that determines the charges assigned to the beads by matching the electrostatic potential in the atomistic simulation. Lastly, a phenomenological model is developed that represents fibrinogen by five beads connected by rigid rods with two hinges. This model only captures the large-scale dynamics seen in the atomistic simulations, but can shed light on experimental observations of fibrinogen conformations at inorganic surfaces.
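A heterogeneous elastic-network potential of the kind described above is typically written in the generic form below; the spring constants, cutoff and electrostatic treatment are assumptions for illustration, not values from the thesis.

```latex
% Generic form of a heterogeneous elastic-network potential for the intra-molecular
% bead interactions (sketch; spring constants k_{ij} and the cutoff r_c are model
% parameters, not values from the thesis). r_{ij} is the distance between beads i and
% j, and r^0_{ij} its reference value in the atomistic structure.
\[
  V_{\mathrm{intra}} \;=\; \sum_{i<j,\; r^{0}_{ij} < r_{c}} \frac{k_{ij}}{2}\,
  \bigl( r_{ij} - r^{0}_{ij} \bigr)^{2}
\]
% Inter-molecular interactions are then a sum of electrostatic and van der Waals terms
% between beads, with bead charges chosen so that the coarse-grained electrostatic
% potential matches the one computed in the atomistic simulations.
```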
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs at different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, while the other is the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two aforementioned models will be investigated in the following chapters, by means of LES of channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, at relatively coarse resolutions. Data from each simulation will be compared to baseline DNS data. Results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has a clear potential for industrial CFD usage.
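For context, the generic eddy-viscosity closure underlying models such as the DEVM is shown below in its textbook form; the notation may differ from the thesis, and the remarks on the dynamic procedure and the EASSM are summary statements, not equations taken from the work.

```latex
% Generic eddy-viscosity SGS closure (textbook form; notation may differ from the
% thesis). $\bar{S}_{ij}$ is the resolved strain-rate tensor, $\Delta$ the filter width.
\[
  \tau_{ij} - \frac{\delta_{ij}}{3}\,\tau_{kk} \;=\; -2\,\nu_{t}\,\bar{S}_{ij},
  \qquad
  \nu_{t} = (C\,\Delta)^{2}\,|\bar{S}|,
  \qquad
  |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
\]
% In the dynamic procedure of \cite{germano1991dynamic}, the coefficient $C$ is
% computed at run time from the resolved field using a test filter and the Germano
% identity, rather than being prescribed a priori. The EASSM instead provides a
% tensorial (anisotropic) expression for $\tau_{ij}$, which is what allows it to
% capture small-scale anisotropy.
```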
Abstract:
Global climate change in recent decades has strongly influenced the Arctic, generating pronounced warming accompanied by a significant reduction of sea ice in seasonally ice-covered seas and a dramatic increase of open-water regions exposed to wind [Stephenson et al., 2011]. By strongly scattering the wave energy, thick multiyear ice prevents swell from penetrating deeply into the Arctic pack ice. However, with the recent changes affecting Arctic sea ice, waves gain more energy from the extended fetch and can therefore penetrate further into the pack ice. Arctic sea ice also appears weaker during the melt season, extending the transition zone between thick multi-year ice and the open ocean. This region is called the Marginal Ice Zone (MIZ). In the Arctic, the MIZ is mainly encountered in the marginal seas, such as the Nordic Seas, the Barents Sea, the Beaufort Sea and the Labrador Sea. Formed by numerous blocks of sea ice of various diameters (floes), the MIZ under certain conditions allows maritime transportation, stimulating dreams of industrial and touristic exploitation of these regions and possibly allowing, in the near future, a maritime connection between the Atlantic and the Pacific. With the increasing human presence in the Arctic, waves pose security and safety issues. As marginal seas are targeted for oil and gas exploitation, understanding and predicting ocean waves and their effects on sea ice become crucial for structure design and for the real-time safety of operations. The juxtaposition of waves and sea ice represents a risk for personnel and equipment deployed on ice, and may complicate critical operations such as platform evacuations. The risk is difficult to evaluate because there are no long-term observations of waves in ice, swell events are difficult to predict from local conditions, ice breakup can occur on very short time scales, and wave-ice interactions are beyond the scope of current forecasting models [Liu and Mollo-Christensen, 1988; Marko, 2003]. In this thesis, a newly developed Waves in Ice Model (WIM) [Williams et al., 2013a; Williams et al., 2013b] and its related Ocean and Sea Ice Model (OSIM) will be used to study the MIZ and the improvement of wave modeling in ice-infested waters. The following work has been conducted in collaboration with the Nansen Environmental and Remote Sensing Center and within the SWARP project, which aims to extend operational services supporting human activity in the Arctic by including forecasts of waves in ice-covered seas, forecasts of sea ice in the presence of waves, and remote sensing of both wave and sea ice conditions. The WIM will be included in the downstream forecasting services provided by the Copernicus Marine Environment Monitoring Service.
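Schematically, waves-in-ice models of this kind combine two key ingredients: an attenuation law for the wave energy as it propagates into the ice, and a breakup criterion based on the flexural strain the waves impose on the ice. The expressions below are a sketch of that general structure, not the exact equations of the WIM.

```latex
% Schematic form of the two key ingredients of waves-in-ice models of this kind
% (sketch of the general structure, not the exact equations of the WIM).
\[
  E(x,\omega) \;=\; E_0(\omega)\, e^{-\hat{\alpha}(\omega, h)\, x}
  \qquad\text{(attenuation of wave energy with distance $x$ into the ice)}
\]
\[
  \varepsilon \;>\; \varepsilon_{c}
  \quad\Longrightarrow\quad \text{ice breakup}
  \qquad\text{(wave-imposed flexural strain exceeds a critical strain)}
\]
% $\hat{\alpha}$ grows with ice thickness $h$ and decreases with wave period, so long
% swell penetrates much further into the pack than short wind sea; the breakup
% criterion is what allows such a model to predict the width of the MIZ.
```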