936 results for Two-Dimensional Search Problem
Abstract:
We study the growth of a tissue construct in a perfusion bioreactor, focusing on its response to the mechanical environment. The bioreactor system is modelled as a two-dimensional channel containing a tissue construct through which a flow of culture medium is driven. We employ a multiphase formulation of the type presented by G. Lemon, J. King, H. Byrne, O. Jensen and K. Shakesheff (Multiphase modelling of tissue growth using the theory of mixtures. J. Math. Biol. 52(2), 2006, 571–594), restricted to two interacting fluid phases representing a cell population (and attendant extracellular matrix) and a culture medium, and employ the simplifying limit of large interphase viscous drag after S. Franks (Mathematical Modelling of Tumour Growth and Stability. Ph.D. Thesis, University of Nottingham, UK, 2002) and S. Franks and J. King (Interactions between a uniformly proliferating tumour and its surrounding: uniform material properties. Math. Med. Biol. 20, 2003, 47–89). The novel aspects of this study are: (i) the investigation of the effect of an imposed flow on the growth of the tissue construct, and (ii) the inclusion of a mechanotransduction mechanism regulating the response of the cells to the local mechanical environment. Specifically, we consider the response of the cells to their local density and to the culture medium pressure. As such, this study forms the first step towards a general multiphase formulation that incorporates the effect of mechanotransduction on the growth and morphology of a tissue construct. The model is analysed using analytical and numerical techniques, the results of which illustrate the potential use of the model to predict the dominant regulatory stimuli in a cell population.
Abstract:
OBJECTIVES: Due to the high prevalence of renal failure in transcatheter aortic valve replacement (TAVR) candidates, a non-contrast MR technique is desirable for pre-procedural planning. We sought to evaluate the feasibility of a novel, non-contrast, free-breathing, self-navigated three-dimensional (SN3D) MR sequence for imaging the aorta from its root to the iliofemoral run-off in comparison to non-contrast two-dimensional balanced steady-state free-precession (2D-bSSFP) imaging. METHODS: SN3D [field of view (FOV), 220-370 mm³; slice thickness, 1.15 mm; repetition/echo time (TR/TE), 3.1/1.5 ms; and flip angle, 115°] and 2D-bSSFP acquisitions (FOV, 340 mm; slice thickness, 6 mm; TR/TE, 2.3/1.1 ms; flip angle, 77°) were performed in 10 healthy subjects (all male; mean age, 30.3 ± 4.3 yrs) using a 1.5-T MRI system. Aortic root measurements and qualitative image ratings (four-point Likert scale) were compared. RESULTS: The mean effective aortic annulus diameter was similar for 2D-bSSFP and SN3D (26.7 ± 0.7 vs. 26.1 ± 0.9 mm, p = 0.23). The mean image quality of 2D-bSSFP (4; IQR 3-4) was rated slightly higher (p = 0.03) than SN3D (3; IQR 2-4). The mean total acquisition time for SN3D imaging was 12.8 ± 2.4 min. CONCLUSIONS: Our results suggest that a novel SN3D sequence allows rapid, free-breathing assessment of the aortic root and the aortoiliofemoral system without administration of contrast medium. KEY POINTS: • The prevalence of renal failure is high among TAVR candidates. • Non-contrast 3D MR angiography allows for TAVR procedure planning. • The self-navigated sequence provides a significantly reduced scanning time.
Abstract:
We present topological derivative and energy based procedures for the imaging of micro and nano structures using one beam of visible light of a single wavelength. Objects with diameters as small as 10 nm can be located and their position tracked with nanometer precision. Multiple objects distributed either on planes perpendicular to the incidence direction or along axial lines in the incidence direction are distinguishable. More precisely, the shape and size of plane sections perpendicular to the incidence direction can be clearly determined, even for asymmetric and nonconvex scatterers. Axial resolution improves as the size of the objects decreases. Initial reconstructions may proceed by gluing together two-dimensional horizontal slices between axial peaks or by locating objects at three-dimensional peaks of topological energies, depending on the effective wavenumber. Below a threshold size, topological derivative based iterative schemes improve initial predictions of the location, size, and shape of objects by postprocessing fixed measured data. For larger sizes, tracking the peaks of topological energy fields that average information from additional incident light beams seems to be more effective.
Abstract:
Rhizobium freirei PRF 81 is used in commercial common bean inoculants in Brazil due to its outstanding nitrogen-fixation efficiency, competitiveness and tolerance to abiotic stresses. Among the environmental conditions faced by rhizobia in soils, acidity is perhaps the most frequently encountered, especially in Brazil. We therefore used proteomics-based approaches to study the responses of PRF 81 to a low-pH condition. R. freirei PRF 81 was grown in TY medium until exponential phase under two treatments: pH 6.8 and pH 4.8. Whole-cell proteins were extracted and separated by two-dimensional gel electrophoresis, using IPG strips with a pH range of 4-7 and 12% polyacrylamide gels. The experiment was performed in triplicate. Protein spots were detected in high-resolution digitized gel images and analyzed with Image Master 2D Platinum v5.0 software. Relative spot volumes (%vol) were compared between the two conditions and statistically evaluated (p ≤ 0.05). Although R. freirei PRF 81 can still grow under more acidic conditions, pH 4.8 was chosen because it did not significantly affect the bacterial growth kinetics, a factor that could otherwise compromise the analysis. Using a narrow pH range, the gel profiles displayed better resolution and reproducibility than with a broader pH range. Spots were mostly concentrated between pH 5-7 and molecular masses of 17-95 kDa. Of the six hundred well-defined spots analyzed, one hundred and sixty-three showed a significant change in %vol, indicating that pH led to substantial changes in the proteome of R. freirei PRF 81. Of these, sixty-one were up-regulated and one hundred and two were down-regulated at pH 4.8. In addition, fourteen spots were identified only under the acid condition, while seven were detected exclusively at pH 6.8. Ninety-five differentially expressed spots, plus two detected exclusively at pH 4.8, were selected for MALDI-TOF identification.
Together with the genome sequencing and the proteome analysis of heat stress, we will search for molecular determinants of PRF 81 related to its capacity to adapt to stressful tropical conditions.
Abstract:
Cutting and packing problems are a family of combinatorial optimization problems that have been widely studied in numerous areas of industry and research, owing to their relevance to an enormous variety of real-world applications. They arise in many production industries where an available material or space must be subdivided into smaller parts. A wide variety of methods exist for solving this type of optimization problem. When proposing a solution method for an optimization problem, it is advisable to take into account the approach and the requirements regarding the problem and its solution. Exact approaches find the optimal solution, but they are only viable for very small problem instances. Heuristics exploit problem-specific knowledge to obtain high-quality solutions without excessive computational effort. Metaheuristics go a step further, as they are capable of solving a very general class of computational problems. Finally, hyperheuristics attempt to automate, usually by incorporating learning techniques, the process of selecting, combining, generating or adapting simpler heuristics to solve optimization problems efficiently. To get the best out of these methods it is necessary to know, in addition to the type of optimization (single- or multi-objective) and the size of the problem, the computational resources available, since the use of parallel machines and implementations can considerably reduce the time needed to obtain a solution. In real industrial applications of cutting and packing problems, the difference between using a quickly obtained solution and using more sophisticated proposals to find the optimal one can determine the survival of the company.
However, developing more sophisticated and effective proposals usually involves a large computational effort, which in real applications can slow down the production process. Therefore, designing proposals that are both effective and efficient is fundamental. For this reason, the main objective of this work is the design and implementation of effective and efficient methods for solving different cutting and packing problems. Moreover, if these methods are defined as schemes that are as general as possible, they can be applied to different cutting and packing problems without requiring many changes to adapt them to each one. Thus, taking into account the wide range of optimization methodologies and the techniques available to increase their efficiency, several methods have been designed and implemented to solve various cutting and packing problems, aiming to improve on existing proposals in the literature. The problems addressed are the Two-Dimensional Cutting Stock Problem, the Two-Dimensional Strip Packing Problem, and the Container Loading Problem. For each of these problems a broad and thorough literature review was carried out, and the chosen variants were solved using different methods: single-objective exact methods and their parallelizations, and multi-objective approximate methods and their parallelizations. The single-objective exact methods were based on tree-search techniques. As multi-objective approximate methods, multi-objective metaheuristics (MOEAs) were selected. In addition, to represent the individuals used by these methods, direct encodings based on postfix notation were employed, as well as encodings that use placement heuristics and hyperheuristics.
Some of these methodologies were improved using parallel schemes based on the OpenMP and MPI programming tools.
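As a concrete illustration of the placement heuristics mentioned above, the following is a minimal sketch (not code from the thesis) of the classical first-fit decreasing-height (FFDH) level heuristic for the Two-Dimensional Strip Packing Problem; the instance data are invented for the example.

```python
def ffdh(rects, strip_width):
    """First-fit decreasing-height level heuristic.
    rects: list of (width, height); returns (total_height, placements),
    where placements is a list of (x, y, width, height) with the origin
    at the bottom-left corner of the strip."""
    rects = sorted(rects, key=lambda r: r[1], reverse=True)
    levels = []          # each level: [y of its floor, x cursor, level height]
    placements = []
    total_height = 0
    for w, h in rects:
        for level in levels:
            if level[1] + w <= strip_width:      # fits on an existing level
                placements.append((level[1], level[0], w, h))
                level[1] += w
                break
        else:                                    # open a new level on top
            levels.append([total_height, w, h])
            placements.append((0, total_height, w, h))
            total_height += h
    return total_height, placements

height, layout = ffdh([(4, 3), (3, 3), (2, 2), (5, 1)], strip_width=6)
```

FFDH is a simple constructive heuristic; in hybrid schemes like those described above, a metaheuristic typically searches over the order (or choice) of rectangles while a placement rule of this kind decodes each candidate into a feasible layout.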
Abstract:
This work presents and describes the procedural methodology, and the reflections underpinning it, for the creation of an interactive digital artifact built so that some of its two-dimensional elements are manipulated in a way that gives the player the illusion of three-dimensionality. In 1992, with the game Wolfenstein 3D, Id Software introduced a visual reference to three-dimensionality using 2D technology: through a system of resizing and positioning images, it managed to convey a sense of three-dimensionality, at the time in a first-person game, i.e., the player experiences a visual field intended to reproduce the experience of the tactile world in the relationship between spaces and objects. These conceptual objectives are reproduced using Processing, a programming language built on Java, seeking on the one hand to convey this apparent illusion of three-dimensionality and, on the other, not to use a digital artifact providing first-person gameplay, but rather to give the player a visual experience encompassing the whole space in which they are allowed to move, where the adversities they must overcome in order to progress are presented. To make this possible, the player takes on the role of a character and, through interaction with the artifact, builds a visual narrative designed to involve them with the theme represented. The theme is a representation of the search for the sarcophagus of the 18th Dynasty pharaoh Tutankhamun (1332-1323 BC) by the British explorer Howard Carter (1874-1939), whose 1922 expedition in the Valley of the Kings remains to this day the most celebrated archaeological discovery related to Ancient Egypt.
Throughout this dissertation, topics are addressed that seek solutions both in the technical and technological field, through programming and its language, and in the visual and aesthetic field, aiming at a conscious connection with the theme to be represented and experienced.
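The resizing trick referred to above (as popularized by Wolfenstein 3D) boils down to perspective projection: a sprite's on-screen size is inversely proportional to its depth. The following sketch illustrates the idea in isolation; the focal-length value is an arbitrary illustrative assumption, not a parameter of the artifact described here.

```python
# Sprite scaling by depth: the core of the 2D "illusion of 3D".
FOCAL = 300.0  # hypothetical projection-plane distance, in pixels

def project_sprite(world_h, depth):
    """Return the on-screen height (pixels) of a sprite of height
    world_h (world units) located at the given depth (world units)."""
    if depth <= 0:
        raise ValueError("sprite is behind the viewer")
    return world_h * FOCAL / depth

# The same sprite drawn at increasing depths shrinks, which the eye
# reads as three-dimensional recession.
sizes = [round(project_sprite(2.0, d)) for d in (2, 4, 8)]  # halves each time
```

In a Processing sketch the resulting value would simply drive the drawn image size, with sprites sorted back to front before drawing.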
Abstract:
This document describes the efforts undertaken to create a generic computing solution for the most recurrent problems found in the production of two-dimensional, sprite-based videogames running on mobile platforms. The developed system is a web application that fits within the scope of the recent cloud-computing paradigm and therefore enjoys all of its advantages in terms of data safety, accessibility and application maintainability. In addition to the functional issues, the system is also studied in terms of its internal software architecture, since it was planned and implemented with the goal of an easy-to-maintain application that is both scalable and adaptable. Furthermore, an algorithm is proposed that aims to find an optimized solution to the space-distribution problem of several rectangular areas, with no overlapping and no dimensional restrictions, either on the final arrangement or on the arranged areas.
Abstract:
In hydrocarbon exploration, the great enigma is the location of the deposits. Great efforts are undertaken to identify and locate them more accurately while improving the cost-effectiveness of oil extraction. Seismic methods are the most widely used because they are indirect, i.e., they probe the subsurface layers without invading them. A seismogram is a representation of the Earth's interior and its structures through a conveniently arranged display of the data obtained by seismic reflection. A major problem in this representation is the intensity and variety of noise present in the seismogram, such as surface-borne noise that contaminates the relevant signals and may mask the desired information carried by waves scattered in deeper regions of the geological layers. A tool was developed to suppress these noises based on the 1D and 2D wavelet transforms. The program, written in Java, separates seismic images according to direction (horizontal, vertical, mixed or local) and wavelength band, using Daubechies wavelets, auto-resolution and tensor products of wavelet bases. In addition, an option was developed to process a single image using either the tensor product of two one-dimensional wavelets or the tensor product of a one-dimensional wavelet with identities. In the latter case, the wavelet decomposition of a two-dimensional signal is carried out in a single direction. This decomposition makes it possible to stretch the two-dimensional wavelets along a chosen direction, correcting scale effects by applying auto-resolutions. In other words, the treatment of a seismic image is improved by using the 1D and 2D wavelets at different stages of auto-resolution. Improvements were also implemented in the display of the images associated with the decompositions at each auto-resolution level, facilitating the choice of images containing the signals of interest for noise-free image reconstruction.
The program was tested with real data and the results were good.
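The tensor-product construction described above can be illustrated with a single level of the 2D Haar transform. This NumPy sketch is a simplified stand-in (the tool described here uses Daubechies wavelets in Java), and the subband naming is one common convention: directional detail bands are exactly what allow noise of a given orientation to be isolated and suppressed.

```python
import numpy as np

def haar_step(a):
    """1D Haar step along the last axis: (lowpass, highpass) halves."""
    lo = (a[..., 0::2] + a[..., 1::2]) / 2.0
    hi = (a[..., 0::2] - a[..., 1::2]) / 2.0
    return lo, hi

def haar2d(img):
    """One 2D level built as a tensor product of 1D transforms.
    Returns (cA, cH, cV, cD): approximation plus three directional
    detail subbands, each a quarter the size of the input."""
    lo, hi = haar_step(img)                   # transform along rows
    cA, cH = (t.T for t in haar_step(lo.T))   # then columns of the lowpass
    cV, cD = (t.T for t in haar_step(hi.T))   # and of the highpass
    return cA, cH, cV, cD

# A constant image has all of its energy in the approximation band:
cA, cH, cV, cD = haar2d(np.ones((8, 8)))
```

Applying `haar_step` along only one axis (the "tensor product with the identity" case mentioned above) decomposes the image in a single direction, which is the mechanism that lets the directional treatment be tuned.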
Abstract:
This thesis presents studies of the role of disorder in non-equilibrium quantum systems. The quantum states relevant to dynamics in these systems are very different from the ground state of the Hamiltonian. Two distinct systems are studied, (i) periodically driven Hamiltonians in two dimensions, and (ii) electrons in a one-dimensional lattice with power-law decaying hopping amplitudes. In the first system, the novel phases that are induced from the interplay of periodic driving, topology and disorder are studied. In the second system, the Anderson transition in all the eigenstates of the Hamiltonian are studied, as a function of the power-law exponent of the hopping amplitude.
In periodically driven systems the study focuses on the effect of disorder in the nature of the topology of the steady states. First, we investigate the robustness to disorder of Floquet topological insulators (FTIs) occurring in semiconductor quantum wells. Such FTIs are generated by resonantly driving a transition between the valence and conduction band. We show that when disorder is added, the topological nature of such FTIs persists as long as there is a gap at the resonant quasienergy. For strong enough disorder, this gap closes and all the states become localized as the system undergoes a transition to a trivial insulator.
Interestingly, the effects of disorder are not necessarily adverse, disorder can also induce a transition from a trivial to a topological system, thereby establishing a Floquet Topological Anderson Insulator (FTAI). Such a state would be a dynamical realization of the topological Anderson insulator. We identify the conditions on the driving field necessary for observing such a transition. We realize such a disorder induced topological Floquet spectrum in the driven honeycomb lattice and quantum well models.
Finally, we show that two-dimensional periodically driven quantum systems with spatial disorder admit a unique topological phase, which we call the anomalous Floquet-Anderson insulator (AFAI). The AFAI is characterized by a quasienergy spectrum featuring chiral edge modes coexisting with a fully localized bulk. Such a spectrum is impossible for a time-independent, local Hamiltonian. These unique characteristics of the AFAI give rise to a new topologically protected nonequilibrium transport phenomenon: quantized, yet nonadiabatic, charge pumping. We identify the topological invariants that distinguish the AFAI from a trivial, fully localized phase, and show that the two phases are separated by a phase transition.
The thesis also presents a study of disordered systems using Wegner's flow equations. The Flow Equation Method was proposed as a technique for studying excited states in an interacting system in one dimension. We apply this method to a one-dimensional tight-binding problem with power-law decaying hoppings. This model exhibits a transition as a function of the decay exponent. It is shown that the entire phase diagram, i.e. the delocalized, critical and localized phases of these systems, can be studied using this technique. Based on it, we develop a strong-bond renormalization group procedure in which we solve the flow equations iteratively. This renormalization group approach provides a new framework for studying the transition in this system.
Abstract:
Electrical impedance tomography is applied to the problem of detecting, locating, and tracking fractures in ballistics gelatin. The system developed is intended to be physically robust and built from off-the-shelf hardware. Fractures were created in two separate ways: by shooting a .22 caliber bullet into the gelatin and by injecting saline solution into it. The .22 caliber bullet created an air gap, seen as an increase in resistivity. The saline solution created a fluid-filled gap, seen as a decrease in resistivity. A double linear array was used to acquire data for each fracture mechanism, and a two-dimensional cross section was inverted from the data. The results were validated by visually inspecting the samples during the fracture event. It was found that, although reconstruction errors were present, it was possible to reconstruct a representation of the resistive cross section. Simulations were performed to better understand the reconstructed cross sections and to demonstrate the capability of a ring array, which was not experimentally tested.
Abstract:
Tetrachloroethene (PCE) and trichloroethene (TCE) form dense non-aqueous phase liquids (DNAPLs), which are persistent groundwater contaminants. DNAPL dissolution can be "bioenhanced" via dissolved contaminant biodegradation at the DNAPL-water interface. This research hypothesized that: (1) competitive interactions between different dehalorespiring strains can significantly impact the bioenhancement effect and the extent of PCE dechlorination; and (2) hydrodynamics will affect the outcome of competition and the potential for bioenhancement and detoxification. A two-dimensional coupled flow-transport model was developed, with a DNAPL pool source and multiple microbial species. In the scenario presented, Dehalococcoides mccartyi 195 competes with Desulfuromonas michiganensis for the electron acceptors PCE and TCE. Simulations under biostimulation and low velocity (vx) conditions suggest that the bioenhancement with Dsm. michiganensis alone was modestly increased by Dhc. mccartyi 195. However, the presence of Dhc. mccartyi 195 enhanced the extent of PCE transformation. Hydrodynamic conditions impacted the results by changing the dominant population under low and high vx conditions.
Abstract:
3D film’s explicit new space depth arguably provides both an enhanced realistic quality to the image and a wealth of more acute visual and haptic sensations (a ‘montage of attractions’) to the increasingly involved spectator. But David Cronenberg’s related ironic remark that ‘cinema as such is from the outset a «special effect»’ should warn us against the geometrical naiveté of such assumptions, within a Cartesian ocularcentric tradition long since overcome by Merleau-Ponty’s embodiment of perception and Deleuze’s notion of the self-consistency of the artistic sensation and space. Indeed, ‘2D’ traditional cinema already provides the accomplished «fourth wall effect», enclosing the beholder behind his back within a space that no longer belongs to the screen (nor to ‘reality’) as such, and therefore is no longer ‘illusorily’ two-dimensional. This kind of totally absorbing, ‘dream-like’ space, metaphorical for both painting and cinema, is illustrated by the episode ‘Crows’ in Kurosawa’s Dreams. Such a space requires the actual effacement of the empirical status of spectator, screen and film as separate dimensions, and it is precisely the characteristic 3D unfolding of merely frontal space layers (and film events) out of the screen towards us (and sometimes above the heads of the spectators before us) that reinstalls, at the core of the film-viewing phenomenon, a regressive struggle with reality and with different degrees of realism, originally overcome by film since the Lumières’ seminal demonstration with Arrival of a Train at La Ciotat.
Through an analysis of crucial aspects in Avatar and the recent Cave of Forgotten Dreams, both dealing with historical and ontological deepening processes of ‘going inside’, we shall try to show how the formal and technically advanced component of those 3D-depth films impairs, on the contrary, their apparent conceptual purpose on the level of contents, and we will assume, drawing on Merleau-Ponty and Deleuze, that this technological mistake is due to a lack of recognition of the nature of perception and sensation in relation to space and human experience.
Abstract:
A servo-controlled automatic machine can perform tasks that involve synchronized actuation of a significant number of servo-axes, namely one degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at hand, a second objective of this work is to perform the optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. Then, we present a novel method where the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines.
The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
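To give a flavor of the transmission-ratio selection described above, the following single-axis sketch minimizes the RMS motor torque over the ratio for a purely inertial load, where the classical inertia-matching optimum sqrt(J_L/J_m) is known in closed form. All numeric values and the acceleration profile are illustrative assumptions, not data from the thesis.

```python
import numpy as np

J_m, J_L = 1e-4, 4e-2            # motor and load inertias [kg m^2] (assumed)
t = np.linspace(0.0, 1.0, 1000)
acc = np.sin(2 * np.pi * t)      # illustrative load acceleration [rad/s^2]

def rms_motor_torque(n):
    """RMS motor torque for transmission ratio n: the motor accelerates
    its own rotor (J_m * n * acc) plus the load reflected through the
    transmission (J_L * acc / n)."""
    torque = J_m * n * acc + (J_L * acc) / n
    return np.sqrt(np.mean(torque ** 2))

# Brute-force scan over candidate ratios (a stand-in for the thesis's
# nonlinear constrained optimization over motor and ratio together).
ratios = np.linspace(1.0, 50.0, 2000)
best = ratios[np.argmin([rms_motor_torque(n) for n in ratios])]
# best should sit near the analytical optimum sqrt(J_L / J_m) = 20
```

In the full two-dimensional problem, a second decision variable (the continuous motor-size parameter obtained by fitting catalog data) is optimized jointly with the ratio, subject to thermal and speed constraints.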
Abstract:
This dissertation aims at developing advanced analytical tools able to model surface waves propagating in elastic metasurfaces. In particular, four different objectives are defined and pursued throughout this work to enrich the description of the metasurface dynamics. First, a theoretical framework is developed to describe the dispersion properties of a seismic metasurface composed of discrete resonators placed on a porous medium, part of which is considered fully saturated. This model combines classical elasticity theory, Biot's poroelasticity and an effective medium approach to describe the metasurface dynamics and its coupling with the poroelastic substrate. Second, an exact formulation based on multiple scattering theory is developed to extend the two-dimensional classical Lamb's problem to the case of an elastic half-space coupled to an arbitrary number of discrete surface resonators. For this purpose, the incident wavefield generated by a harmonic source and the scattered field generated by each resonator are calculated. The substrate wavefield is then obtained as the solution of the coupled problem arising from the interference of the incident field and the multiple scattered fields of the oscillators. Third, the formulation discussed above is extended to three-dimensional contexts. The purpose here is to investigate the dynamic behavior and the topological properties of quasiperiodic elastic metasurfaces. Finally, the multiple scattering formulation is extended to model flexural metasurfaces, i.e., arrays of thin plates. To this end, the resonant plates are modeled by means of their equivalent impedance, derived by exploiting the Kirchhoff plate theory. The proposed formulation permits the treatment of a general flexural metasurface, with no limitation on the number of plates or the configuration taken into account.
Overall, the proposed analytical tools could pave the way for a better understanding of metasurface dynamics and their implementation in engineered devices.
Abstract:
A three-dimensional Direct Finite Element procedure is presented here which takes into account most of the factors affecting the interaction problem of the dam-water-foundation system, whilst keeping the computational cost at a reasonable level by introducing some simplified hypotheses. A truncated domain is defined, and the dynamic behaviour of the system is treated as a wave-scattering problem where the presence of the dam perturbs an original free-field system. The rock foundation truncated boundaries are enclosed by a set of free-field one-dimensional and two-dimensional systems which transmit the effective forces to the main model and apply absorbing viscous boundaries to ensure radiation damping. The water domain is treated as an added mass moving with the dam. A strategy is proposed to keep the viscous dampers at the boundaries unloaded during the initial phases of analysis, when the static loads are initialised, and thus avoid spurious displacements. A focus is given to the nonlinear behaviour of the rock foundation, with concentrated plasticity along the natural discontinuities of the rock mass, immersed in an otherwise linear elastic medium with Rayleigh damping. The entire procedure is implemented in the commercial software Abaqus®, whose base code is enriched with specific user subroutines when needed. All the extra coding is attached to the Thesis and tested against analytical results and simple examples. Possible rock wedge instabilities induced by intense ground motion, which are not easily investigated within a comprehensive model of the dam-water-foundation system, are treated separately with a simplified decoupled dynamic approach derived from the classical Newmark method, integrated with FE calculation of the dam thrust on the wedges during the earthquake.
Both the described approaches are applied to the case study of the Ridracoli arch-gravity dam (Italy) in order to investigate its seismic response to the Maximum Credible Earthquake (MCE) in a full reservoir condition.
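The decoupled wedge check mentioned above rests on the classical Newmark sliding-block idea: the wedge accumulates permanent displacement only while the driving acceleration exceeds its yield value. The sketch below illustrates that integration scheme in its simplest one-directional form; the acceleration pulse and yield value are synthetic illustrations, not Ridracoli MCE data, and the FE-computed dam thrust of the actual procedure is omitted.

```python
import numpy as np

def newmark_displacement(accel, dt, a_yield):
    """Permanent displacement of a rigid block that slides one way:
    the block accelerates relative to the ground while accel exceeds
    a_yield, decelerates below it, and never slides backwards."""
    vel, disp = 0.0, 0.0
    for a in accel:
        if vel > 0.0 or a > a_yield:
            vel += (a - a_yield) * dt   # net driving acceleration
            vel = max(vel, 0.0)         # sliding stops; no rebound
        disp += vel * dt
    return disp

dt = 0.01
t = np.arange(0.0, 2.0, dt)
ground = 3.0 * np.sin(2 * np.pi * t)    # synthetic pulse [m/s^2]
d = newmark_displacement(ground, dt, a_yield=1.0)  # accumulated slip [m]
```

If the yield acceleration is never exceeded, the block does not move at all, which is the stability criterion the wedge analysis exploits.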