990 results for Domain elimination method


Relevance:

30.00%

Publisher:

Abstract:

[EN] We present a new strategy, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between the 2D object and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary within a desired tolerance...
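The first step described above, mapping the input boundary onto the boundary of the unit square, can be sketched as a chord-length parameterization of the boundary polyline (an illustrative simplification: the function name and the uniform arc-length mapping are assumptions, and the actual meccano construction is more elaborate):

```python
import numpy as np

def map_boundary_to_unit_square(boundary_pts):
    """Map a closed 2D boundary polyline onto the boundary of the unit
    square by normalized chord-length (arc-length) parameterization.
    Illustrative sketch only; not the full meccano-method mapping."""
    pts = np.asarray(boundary_pts, dtype=float)
    # Cumulative chord length along the closed polyline, normalized to [0, 1).
    seg = np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])[:-1] / seg.sum()

    # Walk the unit-square perimeter (total length 4) at parameter 4*t.
    s = 4.0 * t
    uv = np.empty_like(pts)
    for i, si in enumerate(s):
        side, local = int(si) % 4, si - int(si)
        if side == 0:
            uv[i] = (local, 0.0)          # bottom edge
        elif side == 1:
            uv[i] = (1.0, local)          # right edge
        elif side == 2:
            uv[i] = (1.0 - local, 1.0)    # top edge
        else:
            uv[i] = (0.0, 1.0 - local)    # left edge
    return uv
```

For a square input boundary the corners land exactly on the corners of the parametric square, which makes the correspondence easy to check.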

Relevance:

30.00%

Publisher:

Abstract:

[EN] We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. The proposed technique only demands the surface of the solid as input data. The key of this method lies in obtaining a volumetric parameterization between the solid and a simple parametric domain. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying the meccano method. The control points of the trivariate T-spline are calculated by imposing interpolation conditions at points situated both in the interior and on the surface of the solid...

Relevance:

30.00%

Publisher:

Abstract:

Congresses and conferences

Relevance:

30.00%

Publisher:

Abstract:

[EN] We present advances of the meccano method for T-spline modelling and analysis of complex geometries. We consider a planar domain composed of several irregular sub-domains. These sub-regions are defined by their boundaries and can represent different materials. The bivariate T-spline representation of the whole physical domain is constructed from a square. In this procedure, a T-mesh optimization method is crucial. We show results for an elliptic problem using a quadtree local T-mesh refinement technique…

Relevance:

30.00%

Publisher:

Abstract:

[EN] We have recently introduced a new strategy, based on the meccano method [1, 2], to construct a T-spline parameterization of 2D and 3D geometries for the application of isogeometric analysis [3, 4]. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between the objects and the parametric domain, i.e. the meccano. The key of the method lies in defining an isomorphic transformation between the parametric and physical T-mesh, finding the optimal position of the interior nodes once the meccano boundary nodes are mapped to the boundary of the physical domain.

Relevance:

30.00%

Publisher:

Abstract:

[EN] The authors have recently introduced the meccano method for tetrahedral mesh generation and volume parameterization of solids. In this paper, we present advances of the method for T-spline modelling and analysis of complex geometries. We consider a planar domain composed of several irregular sub-domains. These sub-regions are defined by their boundaries and can represent different materials. The bivariate T-spline representation of the whole physical domain is constructed from a square. In this procedure, a T-mesh optimization method is crucial. We show results for an elliptic problem using a quadtree local T-mesh refinement technique…
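A quadtree local refinement step like the one mentioned above can be sketched as: split every cell whose error indicator exceeds a tolerance into four children, up to a maximum level. This is a toy illustration under assumed conventions (cells as `(x, y, size, level)` tuples on the unit square, a user-supplied indicator); the T-mesh optimization in the text involves additional conformity rules:

```python
def refine_quadtree(cells, error_fn, tol, max_level=6):
    """Adaptive quadtree refinement sketch. `error_fn(cx, cy, h)` is an
    error indicator evaluated at the cell centre; cells above `tol` are
    split into four equal children until `max_level` is reached."""
    out = []
    stack = list(cells)
    while stack:
        x, y, h, lvl = stack.pop()
        if lvl < max_level and error_fn(x + h / 2, y + h / 2, h) > tol:
            h2 = h / 2.0
            # Four children: SW, SE, NW, NE quadrants of the parent cell.
            stack += [(x, y, h2, lvl + 1), (x + h2, y, h2, lvl + 1),
                      (x, y + h2, h2, lvl + 1), (x + h2, y + h2, h2, lvl + 1)]
        else:
            out.append((x, y, h, lvl))
    return out
```

With an indicator that simply returns the cell size, the unit square refines uniformly until every cell is below the tolerance.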

Relevance:

30.00%

Publisher:

Abstract:

[EN] We present a new method, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between the 2D object and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary within a desired tolerance…

Relevance:

30.00%

Publisher:

Abstract:

Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that determines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters.

The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration.

The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology.

The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important observations must be made regarding the damping.

The fourth chapter is still devoted to the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in the case of ambient vibrations the signals are in fact often affected by a significant level of noise.

The fifth chapter focuses on another important aspect of the identification process: model updating. Starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined in order to determine which type of model captures the real dynamic behaviour of the bridge most accurately.

The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of a procedure based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and different types of uncertainty.
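The classical FFT-based FRF construction recalled above can be sketched in a few lines. This is a minimal single-record sketch with an illustrative function name; in practice averaged estimators such as H1 = Sxy/Sxx are used to reduce the effect of measurement noise:

```python
import numpy as np

def estimate_frf(x, y, fs):
    """Classical frequency response function estimate H(f) = Y(f) / X(f)
    from one input record x and one response record y sampled at fs Hz.
    Single-record sketch; averaged estimators are used with noisy data."""
    X = np.fft.rfft(x)                      # spectrum of the input force
    Y = np.fft.rfft(y)                      # spectrum of the response
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis in Hz
    return f, Y / X
```

For a purely proportional system (response = gain × input) the estimate returns the gain at every frequency bin, a quick sanity check before applying it to measured records.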

Relevance:

30.00%

Publisher:

Abstract:

Within this PhD thesis several methods were developed and validated which are suitable for environmental samples and materials science and should be applicable to the monitoring of particular radionuclides and to the analysis of the chemical composition of construction materials in the frame of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows extremely low isotope ratios of iodine to be measured. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr due to the tailing of 88Sr+ and in particular Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve the atomic ions of all lanthanides and polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising sensitivity by the use of the high-resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro-local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison with a non-cooled laser ablation chamber.

In order to reduce the quantification problem, a mono-gas on-line solution-based calibration was developed, based on the insertion of a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on a NiCrAlY-based alloy and coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparing the LA-ICP-MS results with data acquired by EDX.
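Quantification by relative sensitivity coefficients, mentioned above, normalizes an analyte signal to a reference element of known concentration. A minimal sketch, assuming the common definition RSC_i = (I_i / c_i) / (I_ref / c_ref); all variable names are illustrative and not taken from the thesis:

```python
def quantify_with_rsc(intensity, intensity_ref, conc_ref, rsc):
    """RSC-based quantification sketch: with
    RSC_i = (I_i / c_i) / (I_ref / c_ref), the analyte concentration is
    c_i = (I_i / I_ref) * c_ref / RSC_i."""
    return (intensity / intensity_ref) * conc_ref / rsc
```

For example, an analyte with twice the reference intensity and an RSC of 2 relative to a 10 µg/g reference resolves to 10 µg/g.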

Relevance:

30.00%

Publisher:

Abstract:

Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem of learning as much as possible about its coding regions becomes crucial. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is possible by way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications were developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, as a web server for functional and structural protein sequence annotation, BAR+ is freely available at http://bar.biocomp.unibo.it/bar2.0.
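Turning an all-against-all comparison into clusters can be sketched with union-find over the sequence pairs that pass the alignment threshold. This is a toy single-linkage illustration; BAR+'s actual stringent metric and statistical validation go well beyond it:

```python
def cluster_sequences(n, edges):
    """Single-linkage clustering of n sequences from pairwise links
    (i, j) that passed a stringent alignment threshold, via union-find
    with path halving."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i, j in edges:
        parent[find(i)] = find(j)  # merge the two clusters

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Annotation transfer would then proceed within each returned cluster, e.g. propagating GO or Pfam terms from annotated members to unannotated ones after statistical validation.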

Relevance:

30.00%

Publisher:

Abstract:

Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown, and several researchers have driven significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result it has been named the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited. It has been proven to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
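The derivative-as-weighted-sum idea can be made concrete by computing first-derivative DQ weighting coefficients with Shu's explicit Lagrange-polynomial formula, a standard construction in the DQ literature. This is a sketch of the basic building block, not the full GDQ machinery discussed in the text:

```python
import numpy as np

def dq_weights_first(x):
    """First-derivative DQ weights a[i, j] such that
    f'(x_i) ~= sum_j a[i, j] * f(x_j), via Shu's formula:
    a_ij = M(x_i) / ((x_i - x_j) * M(x_j)) for i != j, where
    M(x_i) = prod_{k != i} (x_i - x_k), and a_ii = -sum_{j != i} a_ij."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)     # so the row product skips k == i
    M = diff.prod(axis=1)           # M[i] = prod_{k != i} (x_i - x_k)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i, i] = -a[i].sum()       # rows sum to zero (derivative of a constant is 0)
    return a
```

On n nodes the weights differentiate polynomials up to degree n-1 exactly, which is the property that makes DQ spectrally accurate on well-chosen grids.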

Relevance:

30.00%

Publisher:

Abstract:

Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, considering the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves for the fluid and the structure unknowns simultaneously within a single solver. We used the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result using the Backward-Euler time-stepping scheme for both the fluid and solid parts and the finite element method for the space discretization has been proved. The resulting linear system has been solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present corresponding results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics where a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we considered the academic numerical test which consists of simulating the pressure wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach to both two-dimensional and three-dimensional problems.
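The monolithic idea of advancing all unknowns with one implicit solve per time step can be illustrated on a toy linear coupled system with the Backward-Euler scheme. This is a sketch under the assumption of a linear system u' = A u in which "fluid" and "structure" unknowns share one vector; the actual FSI problem is non-linear and far larger:

```python
import numpy as np

def backward_euler(A, u0, dt, n_steps):
    """Monolithic Backward-Euler time stepping for u' = A u:
    (I - dt*A) u_{n+1} = u_n, so all coupled unknowns are advanced
    together by a single linear solve per step."""
    n = len(u0)
    system = np.eye(n) - dt * np.asarray(A, dtype=float)
    u = np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        u = np.linalg.solve(system, u)  # one monolithic solve per time step
    return u
```

In the full method this dense solve is replaced by the multilevel domain decomposition solver described above, but the time-stepping structure is the same.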

Relevance:

30.00%

Publisher:

Abstract:

[EN] The soluble epoxide hydrolase (sEH) belongs to the family of epoxide hydrolase enzymes. The classical role of the sEH is detoxification: the conversion of potentially harmful epoxides into their harmless diol form. The sEH mainly converts endogenous signalling molecules related to arachidonic acid, such as the epoxyeicosatrienoic acids, into the corresponding diols. The sEH could therefore serve as a target enzyme in the therapy of hypertension and inflammation as well as various other diseases.

The sEH is a homodimer in which each subunit is built from two domains. The catalytic centre of the epoxide hydrolase activity is located in the 35 kDa C-terminal domain. This region of the sEH has already been studied in detail, and nearly all catalytic properties of the enzyme and their associated functions are known in connection with this domain. In contrast, little is known about the 25 kDa N-terminal domain. The N-terminal domain of the sEH is assigned to the haloacid dehalogenase (HAD) superfamily of hydrolases, but its function long remained unclear. Our group was the first to show that the mammalian sEH is a bifunctional enzyme which, in addition to the well-known enzymatic activity in the C-terminal region, exhibits a second enzymatic function with Mg2+-dependent phosphatase activity in the N-terminal domain. Based on the homology of the N-terminal domain with other enzymes of the HAD family, a two-step reaction is assumed for the phosphatase function (dephosphorylation).

To further elucidate the catalytic mechanism of the dephosphorylation, biochemical analyses of the human sEH phosphatase were carried out by generating mutations in the active site via site-directed mutagenesis. The aim was to identify the active-site amino acid residues involved in the catalytic activity and to specify their role in the dephosphorylation.

On the basis of the structural and possible functional similarities between the sEH and other members of the HAD superfamily, amino acids (conserved and partially conserved residues) in the active site of the sEH phosphatase domain were selected as candidates. Of the amino acids forming the phosphatase domain, eight were chosen (Asp9 (D9), Asp11 (D11), Thr123 (T123), Asn124 (N124), Lys160 (K160), Asp184 (D184), Asp185 (D185), Asn189 (N189)) to be exchanged for non-functional amino acids by site-directed mutagenesis. Each of the selected amino acids was replaced by at least two alternatives: either by alanine or by an amino acid similar to that in the wild-type enzyme. In total, 18 different recombinant clones were generated, each encoding a mutant sEH phosphatase domain in which a single amino acid was exchanged relative to the wild-type enzyme. The 18 mutants and the wild type (sequence of the N-terminal domain without mutation) were cloned into an expression vector in E. coli, and the nucleotide sequences were confirmed by restriction digestion and sequencing. The N-terminal domain of the sEH (25 kDa subunit) generated in this way was then successfully purified by metal-affinity chromatography and tested for phosphatase activity towards the generic substrate 4-nitrophenyl phosphate. Mutants that showed phosphatase activity were subsequently subjected to kinetic assays. Based on these investigations, kinetic parameters were calculated with four well-established methods, and the results were interpreted with the direct linear plot method.

The results showed that most of the 18 generated mutants were inactive or had lost a large part of the enzymatic activity (Vmax) compared with the wild type (WT: Vmax = 77.34 nmol mg^-1 min^-1). This loss of activity could not be explained by a loss of structural integrity, since the wild-type and mutant proteins behaved identically during chromatography. All exchanges of Asp9 (D9), Lys160 (K160), Asp184 (D184) and Asn189 (N189) led to a complete loss of phosphatase activity, indicating their catalytic function in the N-terminal region of the sEH. Some of the exchanges carried out for Asp11 (D11), Thr123 (T123), Asn124 (N124) and Asp185 (D185) strongly reduced the phosphatase activity compared with the wild type, although residual activity was still measurable to a varying extent for the individual protein mutants (2-10% and 40% of the WT enzyme activity). Moreover, the mutants of this group showed altered kinetic properties (Vmax alone, or Vmax and Km). The kinetic analysis of the Asp11 → Asn mutant was of particular interest because of the strong Vmax reduction detectable only in this mutant (8.1 nmol mg^-1 min^-1) and a significant reduction of the Km (Asp11: Km = 0.54 mM; WT: Km = 1.3 mM), implying a role of Asp11 (D11) in the second, hydrolysis step of the catalytic cycle.

In summary, the results show that all amino acids investigated in this work are required for the phosphatase activity of the sEH and form the active site of the sEH phosphatase in the N-terminal region of the enzyme. Furthermore, these results help clarify the potential role of the investigated amino acids and support the hypothesis that the dephosphorylation reaction proceeds in two steps. A combined reaction mechanism, similar to that of other enzymes of the HAD family, is therefore conceivable for the dephosphorylation function. This assumption is supported by the 3D structure of the N-terminal domain, the results of this work, and the results of further biochemical analyses. The two-step dephosphorylation mechanism involves a nucleophilic attack on the substrate phosphorus by the active-site nucleophile Asp9 (D9), forming an acyl-phosphate enzyme intermediate, followed by the release of the dephosphorylated substrate. In the second step, the hydrolysis of the enzyme-phosphate intermediate takes place, supported by Asp11 (D11), and the phosphate group is released. The other investigated amino acids are involved in the binding of Mg2+ and/or substrate.

This work thus further elucidated the catalytic mechanism of the sEH phosphatase, and important open questions, such as the physiological role of the sEH phosphatase, its endogenous physiological substrates, and the exact functional mechanism of the bifunctional enzyme (the communication between its two catalytic units), were identified and discussed.
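The kinetic parameters discussed above (Km, Vmax) can be estimated from rate measurements in several classical ways. A minimal sketch using the Lineweaver-Burk linearization, 1/v = (Km/Vmax)(1/S) + 1/Vmax, is shown below; this is only one of the four estimators the work compares (it favours the direct linear plot), and the function name is illustrative:

```python
import numpy as np

def michaelis_menten_lb(S, v):
    """Estimate Km and Vmax from substrate concentrations S and initial
    rates v via the Lineweaver-Burk double-reciprocal fit:
    1/v = (Km/Vmax) * (1/S) + 1/Vmax."""
    S, v = np.asarray(S, dtype=float), np.asarray(v, dtype=float)
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax
```

With noise-free data generated from given parameters, the fit recovers them exactly; with real data the double-reciprocal transform amplifies noise at low rates, one reason the direct linear plot is often preferred.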

Relevance:

30.00%

Publisher:

Abstract:

Coronary late stent thrombosis, a rare but devastating complication, remains an important concern, in particular with the increasing use of drug-eluting stents. Notably, pathological studies have indicated that the proportion of uncovered coronary stent struts represents the best morphometric predictor of late stent thrombosis. Intracoronary optical frequency domain imaging (OFDI), a novel second-generation imaging method derived from optical coherence tomography (OCT), may allow rapid imaging for the detection of coronary stent strut coverage with markedly higher precision than intravascular ultrasound, owing to its microscopic resolution (axial approximately 10-20 µm), and at a substantially increased speed of image acquisition compared with first-generation time-domain OCT. However, a histological validation of coronary OFDI for the evaluation of stent strut coverage in vivo is urgently needed. Hence, the present study was designed to evaluate the capacity of coronary OFDI to detect and evaluate stent strut coverage in a porcine model by scanning electron microscopy (SEM) and light microscopy (LM) analysis.

Relevance:

30.00%

Publisher:

Abstract:

Osteoarticular allograft transplantation is a popular treatment method for wide surgical resections with large defects. For this reason hospitals are building bone data banks. Performing the optimal allograft selection from bone banks is crucial to the surgical outcome and patient recovery. However, current approaches are very time consuming, hindering an efficient selection. We present an automatic method based on the registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm. This term replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed from two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries and choices of weights, since this fine tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed approach provides a clear, anatomically motivated modelling contribution through the specific three-component transformation model, and shows a clear performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons. In addition, this simple model improves the robustness of the demons while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
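The polyaffine idea, blending a few regional transforms into one smooth deformation with spatial weights, can be sketched as follows. This is a toy with normalized Gaussian weights and direct averaging of displacements; the function name and weighting scheme are assumptions, and the paper's log-domain fusion of the three femur components (head, shaft, condyles) is simplified away:

```python
import numpy as np

def polyaffine_displacement(x, transforms, centers, sigma):
    """Weighted fusion of regional affine transforms into one smooth
    displacement at point x: u(x) = sum_i w_i(x) * ((A_i x + t_i) - x),
    with normalized Gaussian weights around each region centre.
    `transforms` is a list of (A, t) pairs matching `centers`."""
    x = np.asarray(x, dtype=float)
    w = np.array([np.exp(-np.sum((x - np.asarray(c)) ** 2) / (2 * sigma ** 2))
                  for c in centers])
    w /= w.sum()                      # weights form a partition of unity
    u = np.zeros_like(x)
    for wi, (A, t) in zip(w, transforms):
        u += wi * (np.asarray(A) @ x + np.asarray(t) - x)
    return u
```

A quick sanity check: if every regional transform is the same translation, the fused displacement equals that translation everywhere, regardless of the weights.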