987 results for solution structures
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety distance to the surrounding components, both in their rest position and during a motion. If components fall below the safety distance, their shape or position must be changed. For this, it is essential to know precisely which regions of the components violate the safety distance.

In this thesis we present a solution for computing, in real time, all regions of two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution that can be divided into the following three major topics.

In the first part of this thesis we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times it is essential to incorporate the required safety distance into the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs that have to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. Our benchmarks show that our solutions can compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to reducing the memory consumption of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighbouring cells often have similar contents. Exploiting this redundancy, our approach compresses the cell contents of a uniform grid losslessly to one fifth of their original size and decompresses them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure clearance analysis, we present applications to several path-planning problems.
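To make the clearance-aware query concrete, the sketch below shows, in Python and under simplified assumptions, a conservative candidate-pair filter based on bounding boxes inflated by the safety distance, plus a cheap early-accept that flags a triangle pair as tolerance-violating as soon as two of its vertices are closer than the clearance. This is only an illustration of the general idea; it is not the flat hierarchy, the uniform grids, or the dual-space tolerance test developed in the thesis, and all function names are invented for this example.

import itertools
import math

def aabb(tri):
    # Axis-aligned bounding box of a triangle given as three (x, y, z) vertices.
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_within(b1, b2, clearance):
    # Conservative filter: returns False only if the boxes are separated by more
    # than `clearance` along some axis, in which case the triangles cannot
    # violate the safety distance.
    (lo1, hi1), (lo2, hi2) = b1, b2
    return all(lo2[k] - hi1[k] <= clearance and lo1[k] - hi2[k] <= clearance
               for k in range(3))

def vertices_within(t1, t2, clearance):
    # Cheap sufficient test: if any two vertices are already closer than the
    # clearance, the pair is certainly tolerance-violating and no exact
    # triangle-triangle test is needed.
    return any(math.dist(p, q) < clearance for p, q in itertools.product(t1, t2))

def candidate_pairs(tris_a, tris_b, clearance):
    # Brute-force broad phase over all pairs (a uniform grid would replace this).
    boxes_a = [aabb(t) for t in tris_a]
    boxes_b = [aabb(t) for t in tris_b]
    for i, j in itertools.product(range(len(tris_a)), range(len(tris_b))):
        if boxes_within(boxes_a[i], boxes_b[j], clearance):
            yield i, j

Pairs that pass the box filter but are not caught by the vertex early-accept would then go to an exact triangle-triangle tolerance test, such as the dual-space test the thesis proposes.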
Abstract:
The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma and input units represented by the dendritic branches. Although this interpretation endows a neuron with high computational power, it is functionally not clear why nature would have preferred the dendritic solution, with a single but complex neuron, over the network solution with many but simple units. We show that the dendritic solution has a distinct advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive immediate feedback from the somatic output spike, whereas in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario, we formally derive a learning rule for the synaptic contacts on the individual dendritic trees which depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model for two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback represented by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
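As a rough illustration of the three-factor structure of such a rule, in which presynaptic activity, a local NMDA spike and the backpropagated somatic spike build an eligibility trace that a delayed reward later converts into a weight change, the following Python sketch implements a generic reward-modulated plasticity update. It is a simplified stand-in with assumed dynamics and invented variable names, not the rule formally derived in the paper.

import numpy as np

def update_weights(w, pre, nmda, bap, reward, eligibility,
                   eta=0.01, tau_e=0.5, dt=0.001):
    # One time step of a generic reward-modulated three-factor rule.
    # w           : synaptic weights of one dendritic branch, shape (n_syn,)
    # pre         : presynaptic spike indicators, shape (n_syn,), 0 or 1
    # nmda        : 1 if the branch fired a local NMDA spike in this step, else 0
    # bap         : 1 if a somatic spike backpropagated into the branch, else 0
    # reward      : delayed reinforcement signal (scalar; 0 until it arrives)
    # eligibility : running eligibility trace per synapse, shape (n_syn,)

    # Coincidence of presynaptic input, local NMDA spike and somatic feedback
    coincidence = pre * nmda * bap
    # Low-pass filter the coincidence into an eligibility trace
    eligibility += dt / tau_e * (coincidence - eligibility)
    # The (possibly delayed) reward converts eligibility into a weight change
    w += eta * reward * eligibility
    return w, eligibility

The point made in the abstract is that the backpropagating action potential delivers the somatic feedback to each branch immediately, so such an eligibility trace can be formed locally without extra feedback connections.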
Abstract:
Molecular dynamics simulations have been used to explore the conformational flexibility of a PNA·DNA·PNA triple helix in aqueous solution. Three 1.05 ns trajectories starting from different but reasonable conformations have been generated and analyzed in detail. All three trajectories converge within about 300 ps to produce stable and very similar conformational ensembles, which resemble the crystal structure conformation in many details. However, in contrast to the crystal structure, there is a tendency for the direct hydrogen bonds observed between the amide hydrogens of the Hoogsteen-binding PNA strand and the phosphate oxygens of the DNA strand to be replaced by water-mediated hydrogen bonds, which also involve pyrimidine O2 atoms. This structural transition does not appear to weaken the triplex structure but alters groove widths and so may relate to the potential for recognition of such structures by other ligands (small molecules or proteins). Energetic analysis leads us to conclude that the hybrid PNA/DNA triplex has quite different helical characteristics from the all-DNA triplex not because the additional flexibility imparted by the replacement of sugar-phosphate by PNA backbones allows motions that improve base-stacking, but rather because base-stacking interactions are very similar in both types of triplex and the driving force comes from weak but definite conformational preferences of the PNA strands.
Abstract:
Potential energy curves have been computed for [C2H6]2+ ions and the results used to interpret the conspicuous absence of these ions in 2E mass spectra and in charge-stripping experiments. The energies and structures of geometry-optimized ground-state singlet and excited-state triplet [C2H6]2+ ions have been determined along with energies for different decomposition barriers and dissociation asymptotes. Although singlet and triplet [C2H6]2+ ions can exist as stable entities, they possess low energy barriers to decomposition. Vertical Franck-Condon transitions, involving electron impact ionization of ethane as well as charge-stripping collisions of [C2H6]+ ions, produce [C2H6]2+ ions which promptly dissociate since they are formed with energies in excess of various decomposition barriers. Appearance energies computed for doubly-charged ethane fragment ions are in accordance with experimental values.
Abstract:
OBJECTIVE: The voluntary control of micturition is believed to be integrated by complex interactions among the brainstem, subcortical areas and cortical areas. Several brain imaging studies using positron emission tomography (PET) have demonstrated that frontal brain areas, the limbic system, the pons and the premotor cortical areas are involved. However, the cortical and subcortical brain areas have not yet been precisely identified and their exact function is not yet completely understood. MATERIALS AND METHODS: This study used functional magnetic resonance imaging (fMRI) to compare brain activity during passive filling and emptying of the bladder. Bladder catheterization was performed in seven healthy subjects (one man and six right-handed women). During scanning, the bladder was alternately filled and emptied at a constant rate with a bladder rinsing solution. RESULTS: Comparison between passive filling and emptying of the bladder showed increased brain activity in the right inferior frontal gyrus, the cerebellum and, symmetrically, in the operculum and the mesial frontal cortex. Subcortical areas were not evaluated. CONCLUSIONS: Our results suggest that several cortical brain areas are involved in the regulation of micturition.
Abstract:
Postmortem minimally invasive angiography has already been implemented to support virtual autopsy examinations. An experimental approach in a porcine model to overcome an initially described artificial tissue edema artifact by using a contrast agent solution containing polyethylene glycol (PEG) showed promising results. The present publication describes the first application of PEG in a whole-corpse angiographic CT examination. A minimally invasive postmortem CT angiography was performed on a human corpse using a high-viscosity contrast agent solution containing 65% PEG. Injection was carried out via the femoral artery into the aortic root under simulated cardiac output conditions. Subsequent CT scanning delivered 3D volume data of the whole corpse. Visualization of the human arterial anatomy was excellent, and the contrast agent distribution was, as intended, generally limited to the arterial system. As exceptions, enhancement of the brain, the left ventricular myocardium and the renal cortex became apparent. This most likely reflects centralization of the blood circulation at the time of death, with dilatation of the precapillary arterioles within these tissues. For the brain in particular, this resulted in a distinctly improved visualization of the intracerebral structures by CT. Nevertheless, the general tissue edema artifact of postmortem minimally invasive angiography examinations could be markedly reduced.
Abstract:
Traditional transportation fuel, petroleum, is limited and nonrenewable, and it also causes pollution. Hydrogen is considered one of the best alternative fuels for transportation. The key issue for using hydrogen as a transportation fuel is hydrogen storage. Lithium nitride (Li3N) is an important material that can be used for hydrogen storage. The decompositions of lithium amide (LiNH2) and lithium imide (Li2NH) are important steps for hydrogen storage in Li3N. The effect of anions (e.g. Cl-) on the decomposition of LiNH2 has never been studied. Li3N can react with LiBr to form lithium nitride bromide, Li13N4Br, which has been proposed as a solid electrolyte for batteries. The decompositions of LiNH2 and Li2NH with and without promoter were investigated using temperature-programmed decomposition (TPD) and X-ray diffraction (XRD) techniques. It was found that the decomposition of LiNH2 produced Li2NH and NH3 via two steps: LiNH2 into a stable intermediate species (Li1.5NH1.5) and then into Li2NH. The decomposition of Li2NH produced Li, N2 and H2 via two steps: Li2NH into an intermediate species, Li4NH, and then into Li. The kinetic analysis of the Li2NH decomposition showed that the activation energies are 533.6 kJ/mol for the first step and 754.2 kJ/mol for the second step. Furthermore, XRD demonstrated that the Li4NH generated in the decomposition of Li2NH formed a solid solution with Li2NH. In the solid solution, Li4NH possesses a cubic structure similar to that of Li2NH. The lattice parameter of the cubic Li4NH is 0.5033 nm. The decompositions of LiNH2 and Li2NH can be promoted by chloride ions (Cl-). The introduction of Cl- into LiNH2 resulted in a new NH3 peak at a low temperature of 250 °C in addition to the original NH3 peak at 330 °C in the TPD profiles. Furthermore, Cl- can decrease the decomposition temperature of Li2NH by about 110 °C. The degradation of Li3N was systematically investigated with XRD, Fourier transform infrared (FT-IR) spectroscopy and UV-visible spectroscopy. It was found that O2 does not affect Li3N at room temperature. However, H2O in air causes the degradation of Li3N through the reaction of H2O with Li3N to form LiOH. The LiOH produced can further react with CO2 in air to form Li2CO3 at room temperature. Furthermore, it was revealed that α-Li3N is more stable in air than β-Li3N. The chemical stability of Li13N4Br in air was investigated by XRD, TPD-MS and UV-vis absorption as a function of time. The aging process finally leads to the degradation of Li13N4Br into Li2CO3, lithium bromite (LiBrO2) and gaseous NH3. A reaction order of n = 2.43 gives the best fit for the degradation of Li13N4Br in air. The energy gap of Li13N4Br was calculated to be 2.61 eV.
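To give a feel for what the two reported activation energies mean kinetically, the short Python sketch below compares Arrhenius rate constants for the two decomposition steps. It assumes, purely for illustration, identical pre-exponential factors and an arbitrary temperature, neither of which is reported in the abstract.

import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(ea_j_per_mol, temperature_k, prefactor=1.0):
    # Arrhenius rate constant k = A * exp(-Ea / (R * T))
    return prefactor * math.exp(-ea_j_per_mol / (R * temperature_k))

T = 700.0                      # K, arbitrary illustrative temperature
k1 = arrhenius(533.6e3, T)     # first step of the Li2NH decomposition
k2 = arrhenius(754.2e3, T)     # second step of the Li2NH decomposition

# With equal prefactors the ratio reduces to exp(220.6 kJ/mol / (R * T)),
# i.e. the second step is many orders of magnitude slower at the same temperature.
print(f"k1/k2 at {T:.0f} K: {k1 / k2:.2e}")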
Abstract:
The pH response of a GaN/AlInN/AlN/GaN ion-sensitive field-effect transistor (ISFET) on Si substrates has been characterized. We analyzed the variation of the surface potential (ΔVsp/ΔpH) and of the current (ΔIds/ΔpH) with solution pH in devices with the same indium content (17%, in-plane lattice-matched to GaN) and different AlInN thicknesses (6 nm and 10 nm), and compared the results with the literature. Thinning the barrier, which increases the transconductance of the device, makes the two-dimensional electron gas (2DEG) density at the interface very sensitive to changes at the surface. Although the surface potential sensitivity to pH is similar in the two devices, the current change with pH (ΔIds/ΔpH), when the ISFET is biased through a Ag/AgCl reference electrode, is almost 50% higher in the device with the 6 nm AlInN barrier than in the device with the 10 nm barrier. When measuring the current response (ΔIds/ΔpH) without a reference electrode, the device with the thinner AlInN layer has a response larger than that of the thicker one by a factor of 140%, and that current response without a reference electrode is only 22% lower than its maximum response obtained using the reference electrode.
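The comparison above, similar surface-potential sensitivity but a clearly larger current response for the thinner barrier, is consistent with the usual small-signal picture of a FET-type pH sensor, in which the pH-induced surface-potential shift acts like a gate-voltage change. The relation below is a standard approximation, not a derivation taken from this paper:

\[
  \frac{\Delta I_{ds}}{\Delta \mathrm{pH}} \;\approx\; g_m \,\frac{\Delta V_{sp}}{\Delta \mathrm{pH}},
  \qquad
  g_m = \left.\frac{\partial I_{ds}}{\partial V_{g}}\right|_{V_{ds}}
\]

Since thinning the AlInN barrier from 10 nm to 6 nm raises the transconductance gm, the 6 nm device shows the larger ΔIds/ΔpH even though ΔVsp/ΔpH is essentially the same in both devices.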
Abstract:
This paper addresses the seismic analysis of a deeply embedded, non-slender structure hosting the pumping unit of a reservoir. The dynamic response in this type of problem is usually studied under the assumption of a perfectly rigid structure, using a sub-structuring procedure (three-step solution) proposed specifically for this hypothesis. Such an approach enables a relatively simple assessment of the importance of some key factors influencing the structural response. In this work, the problem is also solved in a single step using a direct approach in which the structure and the surrounding soil are modelled as a coupled system with their actual geometry and flexibility. Results indicate that, quite surprisingly, there are significant differences between the predictions of the two methods. Furthermore, neglecting the flexibility of the structure leads to a significant underestimation of the spectral accelerations at certain points of the structure.
Abstract:
The aim of this paper is to explain the chloride concentration profiles obtained experimentally from control samples of an offshore platform after 25 years of service life. The platform is located 12 km off the coast of the Brazilian state of Rio Grande do Norte, in the north-east of Brazil. The samples were extracted at different orientations and heights above mean sea level. A simple model based on Fick's second law is considered and compared with a finite element model which takes into account transport of chloride ions by diffusion and convection. Results show that convective flows significantly affect the studied chloride penetrations. The convection velocity, obtained by fitting the finite element solution to the experimental data, appears to be directly proportional to the height above mean sea level and also seems to depend on the orientation of the face of the platform. This work shows that considering diffusion alone as the transport mechanism does not allow a good prediction of the chloride profiles, whereas accounting for capillary suction due to moisture gradients permits a better interpretation of the material's behaviour.
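For reference, the diffusion-plus-convection mechanism invoked above can be written in one dimension as ∂C/∂t = D ∂²C/∂x² − v ∂C/∂x, and the Python sketch below integrates it with an explicit finite-difference scheme. All coefficients, boundary conditions and the geometry are placeholder values chosen only to illustrate the mechanism; they are not the values fitted to the platform data in the paper.

import numpy as np

def chloride_profile(D=5e-12, v=1e-10, depth=0.10, years=25.0,
                     c_surface=1.0, nx=200):
    # Explicit finite-difference solution of dC/dt = D*d2C/dx2 - v*dC/dx.
    # D: diffusion coefficient [m^2/s], v: convection velocity [m/s],
    # depth [m], exposure time [years], c_surface: normalized surface concentration.
    dx = depth / (nx - 1)
    dt = 0.4 * dx ** 2 / D              # keeps the explicit scheme stable
    steps = int(years * 365.25 * 24 * 3600 / dt)
    c = np.zeros(nx)
    c[0] = c_surface                    # exposed face held at the surface value
    for _ in range(steps):
        d2c = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        dc = (c[2:] - c[:-2]) / (2 * dx)
        c[1:-1] += dt * (D * d2c - v * dc)
        c[0], c[-1] = c_surface, c[-2]  # Dirichlet at the surface, zero gradient at depth
    return np.linspace(0.0, depth, nx), c

# Example: depths (m) and normalized chloride contents after 25 years
x, profile = chloride_profile()

Fitting v (and D) so that such a profile matches the measured data at each height and orientation is, in essence, what the paper does with its finite element model.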
Abstract:
This paper develops an automatic procedure for the optimal numbering of members and nodes in tree structures. With it, the stiffness matrix is optimally conditioned whether a direct solution algorithm or a frontal one is used to solve the system of equations. In spite of its effectiveness, the procedure is strikingly simple, and so is the computer program shown below.
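The computer program referred to above is not reproduced in this listing. Purely as a generic illustration of what such a numbering aims at — keeping connected nodes close in the numbering so that the bandwidth of the assembled stiffness matrix, and hence the work of a direct or frontal solver, stays small — the Python sketch below relabels the nodes of a tree in breadth-first order. It is not the authors' procedure.

from collections import deque

def bfs_numbering(adjacency, root=0):
    # Relabel the nodes of a tree (adjacency list) in breadth-first order, so
    # that nodes sharing a member receive nearby numbers.
    order, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in adjacency[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return {old: new for new, old in enumerate(order)}

def semi_bandwidth(adjacency, labels):
    # Largest label difference across any member: a proxy for the profile that
    # a direct or frontal solver has to carry.
    return max(abs(labels[i] - labels[j]) for i in adjacency for j in adjacency[i])

# Small example tree: node 0 joined to 1 and 2, node 1 joined to 3 and 4
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
labels = bfs_numbering(tree)
print(labels, semi_bandwidth(tree, labels))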
Abstract:
An AH (affine hypersurface) structure is a pair comprising a projective equivalence class of torsion-free connections and a conformal structure satisfying a compatibility condition which is automatic in two dimensions. They generalize Weyl structures, and a pair of AH structures is induced on a co-oriented non-degenerate immersed hypersurface in flat affine space. The author has defined for AH structures Einstein equations, which specialize on the one hand to the usual Einstein Weyl equations and, on the other hand, to the equations for affine hyperspheres. Here these equations are solved for Riemannian signature AH structures on compact orientable surfaces, the deformation spaces of solutions are described, and some aspects of the geometry of these structures are related. Every such structure is either Einstein Weyl (in the sense defined for surfaces by Calderbank) or is determined by a pair comprising a conformal structure and a cubic holomorphic differential, and so by a convex flat real projective structure. In the latter case it can be identified with a solution of the Abelian vortex equations on an appropriate power of the canonical bundle. On the cone over a surface of genus at least two carrying an Einstein AH structure there are Monge-Ampère metrics of Lorentzian and Riemannian signature and a Riemannian Einstein Kähler affine metric. A mean curvature zero spacelike immersed Lagrangian submanifold of a para-Kähler four-manifold with constant para-holomorphic sectional curvature inherits an Einstein AH structure, and this is used to deduce some restrictions on such immersions.
Abstract:
A computer solution for the analysis of nonprismatic folded plate structures is presented. Arbitrary cross-sections (simple and multiple), continuity over intermediate supports, and general loading and longitudinal boundary conditions are dealt with. The folded plates are assumed to be straight and long (beam-like structures), and some simplifications are introduced in order to reduce the computational effort. The formulation presented here may be very suitable for bridge deck analysis.
Abstract:
The need to develop techniques for predicting the vibro-acoustic response of space structures has gained importance in recent years. The numerical techniques available today can reliably predict the vibro-acoustic behaviour of systems with high or low modal densities. However, the two ranges do not always overlap, which makes it necessary to develop specific methods for the intermediate range, known as the medium modal density range. This range, also known as the mid-frequency range, is the focus of this doctoral thesis, given the lack of specific methods for computing the vibro-acoustic response there. For the structures studied in this work, the low and high modal density ranges correspond, in general, to the low and high frequency ranges, respectively. The numerical methods for obtaining the vibro-acoustic response in these frequency ranges are well established: for the low frequency range deterministic techniques such as the Finite Element Method are employed, while for the high frequency range statistical techniques such as Statistical Energy Analysis are preferred. In the mid-frequency range neither of these numerical methods can be used with sufficient accuracy and, as a consequence, in the absence of more specific proposals, hybrid methods have been developed that combine low and high frequency methods, each attempting to compensate for the deficiencies of the other in this intermediate range. This work proposes two different solutions to the mid-frequency problem. The first, called SHFL (Subsystem based High Frequency Limit procedure), is a multi-hybrid procedure in which each sub-structure of the full system is modelled with a different numerical technique, depending on the frequency range under study. For this purpose the concept of the high frequency limit of a sub-structure is introduced; it marks the limit above which that sub-structure has a modal density high enough to be modelled using Statistical Energy Analysis. If the analysis frequency is lower than the high frequency limit of the sub-structure, the sub-structure is modelled using Finite Elements. With this method the mid-frequency range can be defined precisely: it lies between the smallest and the largest of the high frequency limits of the sub-structures that make up the full system. The results obtained with this method show an improvement in the continuity of the vibro-acoustic response, with a smooth transition between the low and high frequency ranges. The second proposed method is called HS-CMS (Hybrid Substructuring method based on Component Mode Synthesis). It is based on classifying the modal basis of the sub-structures into sets of global modes (affecting the whole system or several of its parts) and local modes (affecting a single sub-structure), using a Component Mode Synthesis method. In this way the modes of the full system can be located spatially and the behaviour of the system can be studied from the point of view of the sub-structures. The concept of the high frequency limit of a sub-structure is again used to carry out the global/local classification of its modes.
From this classification the global equations of motion, governed by the global modes, are derived; the influence of the set of local modes is introduced into these equations through modifications of their dynamic stiffness matrix and force vector. The local equations are solved using Statistical Energy Analysis; this, however, is a hybrid model into which the additional power contributed by the presence of the global modes is introduced. The method has been tested for the computation of the response of structures subjected to both structural and acoustic loads. Both methods were first tested on simple structures in order to establish the basis and hypotheses of their application. They were then applied to space structures, such as satellites and antenna reflectors, showing good results, as concluded from the comparison between the simulations and the experimental data measured in both structural and acoustic tests. This work opens a broad field of research from which accurate and efficient methodologies can be derived to reproduce the vibro-acoustic behaviour of systems in the mid-frequency range. ABSTRACT Over the last years an increasing need for novel prediction techniques for the vibro-acoustic analysis of space structures has arisen. Current numerical techniques are able to predict with enough accuracy the vibro-acoustic behaviour of systems with low and high modal densities. However, space structures are, in general, very complex and they present a range of frequencies in which a mixed behaviour exists. In such cases, the full system is composed of some sub-structures which have low modal density, while others present high modal density. This frequency range is known as the mid-frequency range, and developing methods that accurately describe the vibro-acoustic response in this range is the scope of this dissertation. For the structures under study, the aforementioned low and high modal densities correspond with the low and high frequency ranges, respectively. For the low frequency range, deterministic techniques such as the Finite Element Method (FEM) are used, while for the high frequency range statistical techniques, such as Statistical Energy Analysis (SEA), are considered more appropriate. In the mid-frequency range, where a mixed vibro-acoustic behaviour is expected, neither of these numerical methods can be used with a sufficient confidence level. As a consequence, it is usual to obtain an undetermined gap between low and high frequencies in the vibro-acoustic response function. This dissertation proposes two different solutions to the mid-frequency range problem. The first one, the Subsystem based High Frequency Limit (SHFL) procedure, proposes a multi-hybrid procedure in which each sub-structure of the full system is modelled with the appropriate modelling technique, depending on the frequency of study. With this purpose, the concept of the high frequency limit of a sub-structure is introduced, marking out the limit above which a sub-structure has enough modal density to be modelled by SEA. For a certain analysis frequency, if it is lower than the high frequency limit of the sub-structure, the sub-structure is modelled through FEM and, if the frequency of analysis is higher than the high frequency limit, the sub-structure is modelled by SEA.
The procedure leads to a number of hybrid models required to cover the mid-frequency range, which is defined as the frequency range between the lowest sub-structure high frequency limit and the highest one. Using this procedure, the mid-frequency range can be defined precisely and, as a consequence, an improvement in the continuity of the vibro-acoustic response function is achieved, closing the undetermined gap between the low and high frequency ranges. The second proposed mid-frequency solution is the Hybrid Substructuring method based on Component Mode Synthesis (HS-CMS). The method adopts a partition scheme based on classifying the system modal basis into global and local sets of modes. This classification is performed by using Component Mode Synthesis, in particular a Craig-Bampton transformation, in order to express the system modal basis in terms of the modal bases associated with each sub-structure. Then, each sub-structure modal basis is classified into a global and a local set, the first associated with long-wavelength motion and the second with short-wavelength motion. The high frequency limit of each sub-structure is used as the frequency frontier between both sets of modes. From this classification, the equations of motion associated with the global modes are derived, which include the interaction of local modes by means of corrections in the dynamic stiffness matrix and the force vector of the global problem. The local equations of motion are solved through SEA, where again interactions with global modes are included through an additional input power into the SEA model. The method has been tested for the calculation of the response function of structures subjected to structural and acoustic loads. Both methods have first been tested on simple structures to establish their basis and main characteristics. The methods are also verified on space structures, such as satellites and antenna reflectors, providing good results, as concluded from the comparison with experimental results obtained in both acoustic and structural load tests. This dissertation opens a wide field of research through which further studies could be performed to obtain efficient and accurate methodologies that appropriately reproduce the vibro-acoustic behaviour of complex systems in the mid-frequency range.
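As a minimal sketch of the high-frequency-limit idea on which both methods rest, the Python snippet below classifies a sub-structure from a list of its natural frequencies: here the limit is taken as the lowest band centre above which every analysis band contains at least a chosen number of modes, and the sub-structure is then assigned to SEA above that limit and to FEM below it. The band layout, the mode-count threshold and all names are assumptions made for this illustration; the dissertation's exact definition of the high frequency limit may differ.

import numpy as np

def high_frequency_limit(natural_freqs_hz, band_edges_hz, min_modes=5):
    # Lowest band centre above which every band holds at least `min_modes` modes.
    freqs = np.sort(np.asarray(natural_freqs_hz, dtype=float))
    edges = np.asarray(band_edges_hz, dtype=float)
    counts = np.histogram(freqs, bins=edges)[0]
    centres = 0.5 * (edges[:-1] + edges[1:])
    for i in range(len(counts)):
        if np.all(counts[i:] >= min_modes):
            return centres[i]
    return np.inf  # modal density never becomes high enough in the given range

def choose_model(analysis_freq_hz, hf_limit_hz):
    # SHFL-style assignment: FEM below the sub-structure's limit, SEA above it.
    return "SEA" if analysis_freq_hz >= hf_limit_hz else "FEM"

# Hypothetical sub-structure whose modes thin out towards low frequency
bands = np.array([0.0, 100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
modes = np.concatenate([[40.0, 90.0, 150.0], np.random.uniform(200.0, 3200.0, 300)])
limit = high_frequency_limit(modes, bands)
print(limit, choose_model(1000.0, limit))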