Abstract:
The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model must also ensure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors, and many models can be found in the literature. The contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted, and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry).
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered as elastic bodies governed by the Navier equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach ensures high generality and accuracy but still entails very large computational costs and memory consumption. Due to this high computational load and memory consumption, the integration between multibody and differential modeling is, in the current state of the art, almost absent from the literature, especially in the railway field. However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described.
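The semi-elastic normal force law referred to here is typically Hertzian: the normal force grows with the indentation to the power 3/2. A minimal sketch in Python, assuming a single Hertzian point contact; the stiffness value `k_h` is a hypothetical placeholder, not a parameter taken from the thesis:

```python
def hertz_normal_force(indentation_m, k_h=1.0e11):
    """Hertzian point-contact law: F_n = k_h * delta^(3/2).

    indentation_m -- penetration depth between wheel and rail [m]
    k_h           -- contact stiffness [N/m^1.5], depends on geometry
                     and material (illustrative value, not from the thesis)
    """
    if indentation_m <= 0.0:
        return 0.0  # bodies separated: no contact force
    return k_h * indentation_m ** 1.5

# Example: 0.1 mm indentation
force = hertz_normal_force(1.0e-4)
```

In the semi-elastic scheme described above, this force replaces the Lagrange-multiplier forces of the rigid formulation, which is why small interpenetrations must be permitted.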
Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the models have to consider generic railway tracks and generic wheel and rail profiles; 3) the models have to ensure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the models have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within the multibody models, without look-up tables; 5) the models have to ensure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible, with performance comparable to that of commercial multibody software. At the same time, the analytical approach ensures high accuracy and generality.
Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the model has to consider generic railway tracks and generic wheel and rail profiles; 3) the model has to ensure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within the multibody models; 5) the model has to ensure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. At this point the contact models have been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature.
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models use C S-functions; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. A 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software package for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity. I would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
Abstract:
This thesis deals with an investigation of decomposition and reformulation for solving integer linear programming problems. This method is often a computationally very successful approach, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, is applicable as a black-box solution algorithm, and works as a complement and alternative to the usual solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedra obtained. For a given MIP, several decompositions can be defined depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable of negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops.
The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in the solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
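The reformulation described above can be written out for the single-slave case. Assuming an original MIP min{c^T x : Ax >= b, Dx >= d, x integer}, where the slave constraints Dx >= d are convexified over their extreme points {x^p}, p in P (notation chosen here for illustration, not taken from the thesis), the master keeping both variable sets reads:

```latex
\begin{align*}
\min\ & c^{\top} x \\
\text{s.t.}\quad
  & A x \ge b
    && \text{(original master constraints)} \\
  & x = \sum_{p \in P} \lambda_p \, x^{p}
    && \text{(linking constraints)} \\
  & \sum_{p \in P} \lambda_p = 1
    && \text{(convexity constraint)} \\
  & \lambda_p \ge 0 \quad \forall p \in P .
\end{align*}
```

Pricing then asks whether some extreme point x^p yields a column lambda_p of negative reduced cost with respect to the current master duals; if so it is added and the master re-solved, which is exactly the column generation loop described above.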
Abstract:
The present thesis is a contribution to the multi-variable theory of Bergman and Hardy Toeplitz operators on spaces of holomorphic functions over finite- and infinite-dimensional domains. In particular, we focus on certain spectrally invariant Frechet operator algebras F closely related to the local symbol behavior of Toeplitz operators in F. We summarize results due to B. Gramsch et al. on the construction of Psi_0- and Psi^*-algebras in operator algebras and corresponding scales of generalized Sobolev spaces using commutator methods, generalized Laplacians and strongly continuous group actions. In the case of the Segal-Bargmann space H^2(C^n,m) of Gaussian square-integrable entire functions on C^n, we determine a class of vector fields Y(C^n) supported in complex cones K. Further, we require that for any finite subset V of Y(C^n) the Toeplitz projection P be a smooth element in the Psi_0-algebra constructed by commutator methods with respect to V. As a result we obtain Psi_0- and Psi^*-operator algebras F localized in cones K. It is an immediate consequence that F contains all Toeplitz operators T_f with a symbol f of certain regularity in an open neighborhood of K. There is a natural unitary group action on H^2(C^n,m) which is induced by weighted shifts and unitary groups on C^n. We examine the corresponding Psi^*-algebra A of smooth elements in Toeplitz C^*-algebras. Among other results, sufficient conditions on the symbol f for T_f to belong to A are given in terms of estimates on its Berezin transform. Local aspects of the Szegö projection P_s on the Heisenberg group and the corresponding Toeplitz operators T_f with symbol f are studied. In this connection we apply a result due to Nagel and Stein which states that for any strictly pseudoconvex domain U the projection P_s is a pseudodifferential operator of exotic type (1/2, 1/2).
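For orientation, the central objects named here can be written down explicitly. With P : L^2(C^n, m) -> H^2(C^n, m) the Toeplitz projection and k_z the normalized reproducing kernel of H^2(C^n, m) at z, the Toeplitz operator with symbol f and its Berezin transform are the standard ones:

```latex
T_f h = P(f \cdot h), \qquad h \in H^2(\mathbb{C}^n, m),
\qquad\qquad
\tilde{f}(z) = \langle f\, k_z,\; k_z \rangle_{L^2(\mathbb{C}^n, m)} .
```

The estimates on the Berezin transform mentioned above are thus growth and regularity conditions on the function \tilde{f}.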
The second part of this thesis is devoted to the infinite-dimensional theory of Bergman and Hardy spaces and the corresponding Toeplitz operators. We give a new proof of a result observed by Boland and Waelbroeck, namely that the space H(U) of all holomorphic functions on an open subset U of a DFN-space (dual Frechet nuclear space) is an FN-space (Frechet nuclear space) when equipped with the compact-open topology. Using the nuclearity of H(U), we obtain Cauchy-Weil-type integral formulas for closed subalgebras A of H_b(U), the space of all bounded holomorphic functions on U, where A separates points. Further, we prove the existence of Hardy spaces of holomorphic functions on U corresponding to the abstract Shilov boundary S_A of A and with respect to a suitable boundary measure on S_A. Finally, for a domain U in a DFN-space or a Polish space, we consider the symmetrizations m_s of measures m on U by suitable representations of a group G in the group of homeomorphisms of U. In particular, in the case where m leads to Bergman spaces of holomorphic functions on U, the group G is compact and the representation is continuous, we show that m_s defines a Bergman space of holomorphic functions on U as well. This leads to unitary group representations of G on L^p- and Bergman spaces, inducing operator algebras of smooth elements related to the symmetries of U.
Abstract:
In this thesis, we investigated the evaporation of sessile microdroplets on different solid substrates. Three major aspects were studied: the influence of surface hydrophilicity and heterogeneity on the evaporation dynamics on an insoluble solid substrate, the influence of external process parameters and intrinsic material properties on the microstructuring of soluble polymer substrates, and the influence of an increased area-to-volume ratio in a microfluidic capillary when evaporation is hindered. In the first part, the evaporation dynamics of pure sessile water drops on smooth self-assembled monolayers (SAMs) of thiols or disulfides on gold on mica was studied. With increasing surface hydrophilicity the drop stayed pinned longer. Thus, the total evaporation time of a given initial drop volume was shorter, since the drop surface, through which the evaporation occurs, stays large for longer. Usually, for a single drop undergoing a diffusion-controlled evaporation process, the volume decreases linearly with t^1.5, t being the evaporation time. However, when we measured the total evaporation time, ttot, for multiple droplets with different initial volumes, V0, we found a scaling of the form V0 = a·ttot^b. The more hydrophilic the substrate was, the more the scaling exponent tended towards an increased value of up to 1.6. This can be attributed to an increasing evaporation rate through a thin water layer in the vicinity of the drop. Under the assumption of a constant temperature at the substrate surface, a cooling of the droplet, and thus a decreased evaporation rate, could be excluded as a reason for the different scaling exponent by simulations performed by F. Schönfeld at the IMM, Mainz. In contrast, for a hairy surface, made of dialkyldisulfide SAMs with different chain lengths and a 1:1 mixture of hydrophilic and hydrophobic end groups (hydroxy versus methyl group), the scaling exponent was found to be ~1.4. It increased to ~1.5 with increasing hydrophilicity.
One can only speculate about the reason for this observation: in the case of longer hydrophobic alkyl chains, the formation of an air layer between substrate and surface might be favorable. Thus, the heat transport to the substrate might be reduced, leading to a stronger cooling and thus a decreased evaporation rate. In the second part, the microstructuring of polystyrene surfaces by drops of toluene, a good solvent, was investigated. For this, a novel deposition technique was developed with which the drop can be deposited with a syringe. The polymer substrate lies on a motorized table, which picks up the pendant drop by an upward motion until a liquid bridge is formed. A subsequent downward motion of the table after a variable delay, i.e. the contact time between drop and polymer, leads to the deposition of the droplet, which can then evaporate. The resulting microstructure was investigated in dependence on the process parameters, i.e. the approach and retraction speeds of the substrate and the delay between them, and on the intrinsic material properties, i.e. the molar mass and the type of the polymer/solvent system. The principal equivalence with microstructuring by the ink-jet technique was demonstrated. For a high approach and retraction speed of 9 mm/s and no delay between them, a concave microtopology was observed. In agreement with the literature, this can be explained by a flow of solvent and dissolved polymer to the rim of the pinned droplet, where the polymer accumulates. This effect is analogous to the well-known formation of ring-like stains after the evaporation of coffee drops (coffee-stain effect). With decreasing retraction speed, down to 10 µm/s, the resulting surface topology changes from concave to convex. This can be explained by the increasing dissolution of polymer into the solvent drop prior to the evaporation.
If the polymer concentration is high enough, gelation occurs instead of a flow to the rim, and the shape of the convex droplet is retained. With increasing delay time from 0 ms to 1 s, the depth of the concave microwells decreases from 4.6 µm to 3.2 µm. However, a convex surface topology could not be obtained, since for longer delay times the polymer sticks to the tip of the syringe. Thus, by changing the delay time a fine-tuning of the concave structure is accomplished, while by changing the retraction speed a principal change of the microtopology can be achieved. We attribute this to an additional flow inside the liquid bridge, which enhances polymer dissolution. Even if the pendant drop evaporates about 30 µm above the polymer surface without any contact (non-contact mode), concave structures were observed. Rim heights as high as 33 µm could be generated for exposure times of 20 min. The concave structure lay exclusively above the flat polymer surface outside the structure, even after drying. This shows that toluene is taken up permanently. The increasing rim height, rh, with increasing exposure time to the solvent vapor obeys a diffusion law of the form rh = rh0·t^n, with n in the range 0.46-0.65. This hints at a non-Fickian swelling process. A detailed analysis showed that the rim height of the concave structure is modulated, unlike for the drop deposition. This is due to local stress relaxation, initiated by the increasing toluene concentration in the extruded polymer surface. By altering the intrinsic material parameters, i.e. the polymer molar mass and the polymer/solvent combination, several types of microstructures could be formed. With increasing molar mass from 20.9 kDa to 1.44 MDa, the resulting microstructure changed from convex, to a structure with a dimple in the center, to concave, and finally to an irregular structure.
This observation can be explained if one assumes that the microstructuring is dominated by two opposing effects: a decreasing solubility with increasing polymer molar mass, but an increasing surface tension gradient leading to instabilities of Marangoni type. Thus, a polymer with a low molar mass, close to or below the entanglement limit, is subject to a high dissolution rate, which leads to fast gelation compared to the evaporation rate. This way a coffee-rim-like effect is eliminated early and a convex structure results. For high molar masses, the low dissolution rate and the low polymer diffusion might lead to increased surface tension gradients, and a typical local pile-up of polymer is found. For intermediate polymer masses around 200 kDa, the dissolution and evaporation rates are comparable and the typical concave microtopology is found. This interpretation was supported by a quantitative estimate of the diffusion coefficient and the evaporation rate. For a different polymer/solvent system, polyethylmethacrylate (PEMA)/ethyl acetate (EA), exclusively concave structures were found. Following the statements above, this can be interpreted as the result of a lower dissolution rate. At low molar masses, the concentration of PEMA in EA most likely never reaches the gelation point. Thus, a concave instead of a convex structure occurs. At the end of this section, the optical properties of such microstructures for a potential application as microlenses are studied with laser scanning confocal microscopy. In the third part, the droplet was confined in a glass microcapillary to avoid evaporation. Since here, due to an increased area-to-volume ratio, the surface properties of the liquid and the solid walls become important, the influence of the surface hydrophilicity of the wall on the interfacial tension between two immiscible liquid slugs was investigated. For this, a novel method for measuring the interfacial tension between the two liquids within the capillary was developed.
This technique was demonstrated by measuring the interfacial tensions between slugs of pure water and standard solvents. For toluene, n-hexane and chloroform, values of 36.2, 50.9 and 34.2 mN/m were measured at 20°C, in good agreement with data from the literature. For a slug of hexane in contact with a slug of pure water containing ethanol in a concentration range between 0 and 70% (v/v), a difference of up to 6 mN/m was found when compared to commercial ring tensiometry. This discrepancy is still under debate.
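The scaling law V0 = a·ttot^b from the first part is extracted from measured (ttot, V0) pairs by a least-squares fit in log-log space. A minimal pure-Python sketch on synthetic data; the numbers are illustrative, not the thesis measurements:

```python
import math

def fit_power_law(volumes, times):
    """Fit V0 = a * t^b by linear least squares on log V0 = log a + b log t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the log-log regression line is the scaling exponent b
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)  # intercept gives the prefactor a
    return a, b

# Synthetic diffusion-limited data obeying V0 = 2.0 * t^1.5 exactly
times = [10.0, 20.0, 40.0, 80.0]
vols = [2.0 * t ** 1.5 for t in times]
a, b = fit_power_law(vols, times)  # recovers a ~ 2.0, b ~ 1.5
```

For real data, b near 1.5 signals diffusion-controlled evaporation, while the deviations towards 1.6 discussed above show up directly in this fitted exponent.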
Abstract:
This work contributes to the field of spatial economics by embracing three distinct modelling approaches, belonging to different strands of the theoretical literature. In the first chapter I present a theoretical model in which the changes in an urban system's degree of functional specialisation are linked to (i) firms' organisational choices and (ii) firms' location decisions. The interplay between firms' internal communication/managing costs (between headquarters and production plants) and the cost of communicating with distant business service providers drives the transition from an “integrated” urban system, where each city hosts all the different functions, to a “functionally specialised” urban system, where each city is either a primary business center (hosting advanced business service providers), a secondary business center, or a pure manufacturing city, and all these city types coexist in equilibrium. The second chapter investigates the impact of free trade on welfare in a two-country world modelled as an international Hotelling duopoly with quadratic transport costs and asymmetric countries, where a negative environmental externality is associated with the consumption of the good produced in the smaller country. Countries' relative sizes as well as the intensity of the negative environmental externality affect the potential welfare gains of trade liberalisation. The third chapter focuses on the paradox by which, contrary to theoretical predictions, empirical evidence shows that a decrease in international transport costs causes an increase in foreign direct investments (FDIs). Here we propose an explanation for this apparent puzzle by exploiting an approach which delivers a continuum of Bertrand-Nash equilibria ranging above marginal cost pricing. In our setting, two Bertrand firms, supplying a homogeneous good with a convex cost function, enter the market of a foreign country.
We show that allowing for softer price competition may indeed more than offset the standard effect generated by a decrease in trade costs, thereby restoring FDI incentives.
Abstract:
During the glacial phases, extensive solifluction mass movements led to the formation of cover beds in the European low mountain ranges. These cover beds represent a mixture of different substrates, such as the underlying bedrock, aeolian deposits and local ore veins. The spatial extent of metal contamination caused by small-scale ore veins is amplified by periglacial solifluction. The aims of the present study were a) to clarify the relationship between relief properties and the characteristics of the solifluction cover beds and soils, b) to identify and quantify the contributions of the individual substrates to the soil parent material, using trace element contents and lead isotope ratios as input data for mixing models, and c) to investigate the spatial distribution of lead (Pb) in cover beds that have moved across lead ore veins, to calculate the transport distance of the ore-derived lead, and to determine the factors controlling that distance. Six transects in the south-eastern Rhenish Massif, including the soils developed by periglacial solifluction, were investigated. The pedological field survey followed AG Boden (2005). Samples from the O, A, B and C horizons were analysed for their trace element contents and, in part, for their 206Pb/207Pb isotope ratios. Besides petrography, the factors controlling the distribution and properties of periglacial cover beds are relief properties such as aspect, slope, slope position and curvature. The relief analysis shows thin cover beds with a high rock fragment content in divergent, convex slope sections. In convergent, concave slope sections, the cover bed thickness increases markedly, with an increasing loess loam content and a decreasing rock fragment content.
Depending on the relief properties and positions, the developed soil types range from acidic Cambisols (Braunerden) to stagnic Luvisols (Pseudogley-Parabraunerden). Furthermore, Holocene colluvial deposits occur in rather untypical relief positions, such as elongated, barely inclined slope sections or mid-slope sections. Except for Pb, the trace element contents lie within the range of low background values. The Pb contents lie between 20 and 135 mg kg-1. Decreasing trace element contents and isotope signatures (206Pb/207Pb ratios) of Pb show that almost no Pb from atmospheric deposition has been translocated into the B horizons. A principal component analysis (PCA) of the trace element contents identified four main substrate sources of the investigated B horizons (clay slate, loess, Laacher See tephra [LST] and local Pb ore veins). With a three-component mixing model including clay slate, loess and LST, the trace element contents of all 120 B-horizon samples could be explained, apart from 10 outliers. The mass contribution of the Pb ore to the substrate mixture is <0.1%. The spatial Pb distribution shows areas of local Pb content maxima associated with ore veins located upslope. Using a 206Pb/207Pb isotope-ratio mixing model, 14 areas of elevated local Pb maxima could be identified that contain 76-100% ore-derived lead. With the help of a geographic information system, the transport distances of the ore-derived lead were determined to be 30 to 110 m. The factors controlling the transport distance are the silt concentration and the vertical (profile) curvature. This study shows that relief properties and relief position have a decisive influence on the development of cover beds and soils in the European low mountain ranges. Mixing models combined with trace element analyses and isotope ratios are an important tool for determining the contributions of the individual members of soil substrate mixtures.
Moreover, local lead ore veins can raise the natural Pb contents in soils developed in periglacial cover beds of the last glacial period (Würm) over distances of more than 100 m.
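The two-endmember isotope mixing behind the 14 identified areas can be sketched as follows; the endmember ratios in the example are illustrative placeholders, not the measured values, and the linear mixing of raw ratios is a simplification (strictly, mixing is linear in isotope abundances weighted by Pb content):

```python
def ore_lead_fraction(r_sample, r_ore, r_background):
    """Two-endmember mixing: fraction of ore-derived Pb from 206Pb/207Pb ratios.

    r_sample     -- measured ratio of the soil sample
    r_ore        -- ratio of the local ore-vein endmember
    r_background -- ratio of the background (slate/loess) endmember
    """
    if r_ore == r_background:
        raise ValueError("endmember ratios must differ")
    f = (r_sample - r_background) / (r_ore - r_background)
    return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Illustrative ratios: ore vein 1.17, background 1.20, sample in between
f = ore_lead_fraction(1.176, 1.17, 1.20)  # ~0.8, i.e. ~80% ore-derived Pb
```

Applied pixel-wise on a GIS raster of sample ratios, such a model delineates the areas of predominantly ore-derived lead described above.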
Abstract:
A major weakness of composite materials is that low-velocity impact, introduced accidentally during manufacture, operation or maintenance of the aircraft, may result in delaminations between the plies. Therefore, the first part of this study focuses on the mechanics of curved laminates under impact. To this end, the effect of preloading on the impact response of curved composite laminates is considered. By applying the preload, the through-thickness stress and the curvature of the laminates increase. The results showed that all impact parameters vary significantly. To understand the respective contributions of preloading and pre-stress to the obtained results, another test was designed. The interesting phenomenon is that preloading can decrease the damaged area when the curvature of both specimens is the same. Finally, the effect of curvature type, concave versus convex, is investigated under impact loading. In the second part, a new composition of nanofibrous mats is developed to improve the efficiency of curved laminates under impact loading. First, fracture tests were conducted to assess the effect of Nylon 6,6, PCL, and their mixture on mode I and mode II fracture toughness. For this purpose, nanofibers were electrospun and interleaved at the mid-plane of the composite laminate for the mode I and mode II tests. The results show that Nylon 6,6 is more effective than PCL in mode II, while the effect of PCL on the mode I fracture toughness is greater. By mixing these nanofibers, the shortcomings of the individual nanofibers are compensated, so the Nylon 6,6/PCL nanofibers could increase both mode I and mode II fracture toughness. Then all these nanofibers were placed between all plies of the composite laminate to investigate their effect on the damaged area. The results showed that PCL could decrease the damaged area by about 25%, and Nylon 6,6 and the mixed nanofibers by about 50%.
Abstract:
The present thesis investigates the inverse obstacle problem of two-dimensional electrical impedance tomography (EIT) with backscatter data. We present and analyse the mathematical model for backscatter data, discuss the inverse problem for a single insulating or perfectly conducting inclusion, and propose two reconstruction methods for the inverse obstacle problem with backscatter data. The aim of the inverse obstacle problem of EIT is to identify inhomogeneities (so-called inclusions) of the electrical conductivity of a body from current-voltage measurements on the body's surface. Measuring backscatter data requires only a single pair of electrodes placed close to each other on the body's surface, which is moved along the surface to collect the data. We present a mathematical model for backscatter data and show that backscatter data are the boundary values of a function that is holomorphic outside the inclusions. On this basis we develop the concept of the convex backscatter support: the convex backscatter support is a subset of the convex hull of the inclusions and can therefore be used to locate them. We present an algorithm for computing the convex backscatter support and demonstrate it on numerical examples. Furthermore, we show that a single insulating inclusion is uniquely identifiable from its backscatter data. The proof relies on the Riemann mapping theorem for doubly connected domains and serves as the basis for a reconstruction algorithm whose performance we demonstrate on various examples. A perfectly conducting inclusion, by contrast, cannot always be reconstructed from its backscatter data. We discuss in which cases unique identification fails and give examples of different perfectly conducting inclusions with identical backscatter data.
Resumo:
Geometric packing problems may be formulated mathematically as constrained optimization problems, but finding a good solution is a challenging task. The more complicated the geometry of the container or of the objects to be packed, the more complex the non-penetration constraints become. In this work we propose the use of a physics engine that simulates a system of colliding rigid bodies as a tool to resolve interpenetration conflicts and to optimize configurations locally. We develop an efficient and easy-to-implement physics engine that is specialized for collision detection and contact handling. Alongside the development of this engine, a number of novel algorithms for distance calculation and intersection volume were designed and implemented, which are presented in this work. They are highly specialized to provide fast responses for cuboids and triangles as input geometry, whereas the concepts they are based on can easily be extended to other convex shapes. Especially noteworthy in this context is our ε-distance algorithm: a novel approach that is not only very robust and fast but also compact in its implementation. Several state-of-the-art third-party implementations are presented, and we show that our implementations beat them in runtime and robustness. The packing algorithm that lies on top of the physics engine is a Monte Carlo based approach implemented for packing cuboids into a container described by a triangle soup. We give an implementation for the SAE J1100 variant of the trunk packing problem, compare it to several established approaches, and show that it gives better results in less time than these existing implementations.
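The Monte Carlo packing layer described above can be illustrated with a minimal sketch (not the thesis's implementation, which packs into a triangle-soup container): here axis-aligned cuboids are placed in a box container by proposing random positions and rejecting any proposal that violates the non-penetration constraint, checked with a simple AABB overlap test.

```python
import random

def overlaps(a, b):
    """Axis-aligned overlap test; boxes are (x, y, z, w, h, d)."""
    return all(a[i] < b[i] + b[i + 3] and b[i] < a[i] + a[i + 3]
               for i in range(3))

def pack(container, boxes, attempts=2000, seed=0):
    """Greedy Monte Carlo packing: place each box at a random
    non-overlapping position inside the container, or skip it."""
    rng = random.Random(seed)
    W, H, D = container
    placed = []
    for w, h, d in boxes:
        for _ in range(attempts):
            cand = (rng.uniform(0, W - w), rng.uniform(0, H - h),
                    rng.uniform(0, D - d), w, h, d)
            if not any(overlaps(cand, p) for p in placed):
                placed.append(cand)
                break
    return placed

placement = pack((10, 10, 10), [(3, 2, 2)] * 8)
```

A physics-engine-based approach would instead let overlapping bodies push each other apart via contact forces; the rejection step here merely stands in for that conflict-resolution role.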
Resumo:
The purpose of this study is to analyse the regularity of a differential operator, the Kohn Laplacian, in two settings: the Heisenberg group and strongly pseudoconvex CR manifolds. The Heisenberg group is defined as a space of dimension 2n+1 equipped with a group product. It can be seen in two different ways: as a Lie group and as the boundary of the Siegel upper half space. On the Heisenberg group there exists the tangential CR complex. From this we define its adjoint and the Kohn Laplacian. We then obtain estimates for the Kohn Laplacian and establish its solvability and hypoellipticity. To state L^p and Hölder estimates, we discuss homogeneous distributions. In the second part we work with a manifold M of real dimension 2n+1. We say that M is a CR manifold if certain properties are satisfied; moreover, we say that a CR manifold M is strongly pseudoconvex if the Levi form defined on M is positive definite. Since we will show that the Heisenberg group is a model for strongly pseudoconvex CR manifolds, we look for an osculating Heisenberg structure in a neighborhood of a point of M, and we want this structure to vary smoothly from point to point. To this end, we define normal coordinates and study their properties. We also examine different normal coordinates in the case of a real hypersurface with an induced CR structure. Finally, we define again the CR complex, its adjoint, and the Laplacian operator on M. We study these new operators, proving subelliptic estimates. For this, we do not need M to be strongly pseudoconvex; we require less, namely the Z(q) and Y(q) conditions. This yields local regularity theorems for the Laplacian and shows its hypoellipticity on M.
Resumo:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystalline structures, and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but rather convex polyhedra with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions.
As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified, which are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
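The extrapolation step can be sketched generically (the fit form and all numbers below are illustrative assumptions, not the thesis's actual finite-size-scaling ansatz): given interfacial tensions measured at several box sizes L, one fits sigma(L) = sigma_inf + a*ln(L)/L^2 + b/L^2 by linear least squares and reads off the infinite-volume intercept sigma_inf.

```python
import numpy as np

def extrapolate(L, sigma):
    """Linear least-squares fit of sigma(L) = s_inf + a*ln(L)/L**2 + b/L**2;
    returns the infinite-volume estimate s_inf (the intercept)."""
    L = np.asarray(L, dtype=float)
    # Design matrix: one column per fit coefficient (s_inf, a, b).
    A = np.column_stack([np.ones_like(L), np.log(L) / L**2, 1.0 / L**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(sigma, dtype=float), rcond=None)
    return coeffs[0]

# Synthetic data generated from the same form with s_inf = 0.75:
L = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
sigma = 0.75 + 0.4 * np.log(L) / L**2 - 0.2 / L**2
print(extrapolate(L, sigma))  # recovers 0.75 up to round-off
```

The point of the logarithmic column is exactly the one made above: dropping it and fitting only a 1/L^2 correction would bias the extrapolated intercept, since the ln(L)/L^2 term is of leading order.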
Resumo:
OBJECTIVE: To retrospectively evaluate the craniofacial morphology of children with a complete unilateral cleft lip and palate treated with a 1-stage simultaneous cleft repair performed in the first year of life. METHODS: Cephalograms and extraoral profile photographs of 61 consecutively treated patients (42 boys, 19 girls) who had been operated on at 9.2 (SD, 2.0) months by a single experienced surgeon were analyzed at 11.4 (SD, 1.5) years. The noncleft control group comprised 81 children (43 boys and 38 girls) of the same ethnicity at the age of 10.4 (SD, 0.5) years. RESULTS: In children with cleft, the maxilla and mandible were retrusive, the palatal and mandibular planes were more open, and the sagittal maxillomandibular relationship was less favorable in comparison to noncleft control subjects. Soft tissues in patients with cleft reflected the retrusive morphology of the hard tissues: subnasal and supramental regions were less convex, the profile was flatter, and the nasolabial angle was more acute relative to those of the control subjects. CONCLUSIONS: Craniofacial morphology after 1-stage repair deviated from that of noncleft control subjects. However, the degree of deviation was comparable with that found after treatment with alternative surgical protocols.
Resumo:
In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
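For context, the Euclidean statement (proved for convex domains by Andrews and Clutterbuck in 2011) reads, with $\lambda_1, \lambda_2$ the first two Dirichlet eigenvalues:

```latex
\lambda_2(\Omega) - \lambda_1(\Omega) \;\ge\; \frac{3\pi^2}{d(\Omega)^2},
\qquad \Omega \subset \mathbb{R}^n \text{ convex, } d(\Omega) = \operatorname{diam}(\Omega),
```

and the bound is sharp: it is approached by thin convex domains degenerating to a segment of length $d(\Omega)$. In the hyperbolic setting studied here, no such lower bound in terms of the diameter alone can hold in general, which motivates seeking an upper bound instead.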
Resumo:
Introduction: Spinal fusion is a widely and successfully performed strategy for the treatment of spinal deformities and degenerative diseases. The general approach has been to stabilize the spine with implants so that a solid bony fusion between the vertebrae can develop. However, new implant designs have emerged that aim at preservation or restoration of the motion of the spinal segment. In addition to static, load sharing principles, these designs also require a profound knowledge of kinematic and dynamic properties to properly characterise the in vivo performance of the implants. Methods: To address this, an apparatus was developed that enables the intraoperative determination of the load–displacement behavior of spinal motion segments. The apparatus consists of a sensor-equipped distractor to measure the applied force between the transverse processes, and an optoelectronic camera to track the motion of vertebrae and the distractor. In this intraoperative trial, measurements from two patients with adolescent idiopathic scoliosis with right thoracic curves were made at four motion segments each. Results: At a lateral bending moment of 5 N m, the mean flexibility of all eight motion segments was 0.18 ± 0.08°/N m on the convex side and 0.24 ± 0.11°/N m on the concave side. Discussion: The results agree with published data obtained from cadaver studies with and without axial preload. Intraoperatively acquired data with this method may serve as an input for mathematical models and contribute to the development of new implants and treatment strategies.
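The flexibility figures quoted above are rotations per unit applied moment. A minimal sketch of how such a value could be extracted from intraoperative moment-rotation samples (the numbers below are hypothetical, not the study's data) is a zero-intercept least-squares slope:

```python
def flexibility(moments, rotations):
    """Zero-intercept least-squares slope of rotation [deg] versus
    applied moment [N m], i.e. segment flexibility in deg/(N m)."""
    num = sum(m * r for m, r in zip(moments, rotations))
    den = sum(m * m for m in moments)
    return num / den

# Hypothetical samples from one motion segment during distraction:
moments = [1.0, 2.0, 3.0, 4.0, 5.0]          # applied bending moment, N m
rotations = [0.18, 0.36, 0.54, 0.72, 0.90]   # measured rotation, degrees
print(flexibility(moments, rotations))  # ≈ 0.18 deg/(N m)
```

In the actual apparatus, the moment follows from the distractor force and its lever arm about the segment, and the rotation from the optoelectronically tracked vertebral poses; only the final slope extraction is sketched here.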
Resumo:
BACKGROUND: Chronic neck pain after whiplash injury is caused by cervical zygapophysial joints in 50% of patients. Diagnostic blocks of nerves supplying the joints are performed using fluoroscopy. The authors' hypothesis was that the third occipital nerve can be visualized and blocked with use of an ultrasound-guided technique. METHODS: In 14 volunteers, the authors placed a needle ultrasound-guided to the third occipital nerve on both sides of the neck. They punctured caudal and perpendicular to the 14-MHz transducer. In 11 volunteers, 0.9 ml of either local anesthetic or normal saline was applied in a randomized, double-blind, crossover manner. Anesthesia was controlled in the corresponding skin area by pinprick and cold testing. The position of the needle was controlled by fluoroscopy. RESULTS: The third occipital nerve could be visualized in all subjects and showed a median diameter of 2.0 mm. Anesthesia was missing after local anesthetic in only one case. There was neither anesthesia nor hyposensitivity after any of the saline injections. The C2-C3 joint, in a transversal plane visualized as a convex density, was identified correctly by ultrasound in 27 of 28 cases, and 23 needles were placed correctly into the target zone. CONCLUSIONS: The third occipital nerve can be visualized and blocked with use of an ultrasound-guided technique. The needles were positioned accurately in 82% of cases as confirmed by fluoroscopy; the nerve was blocked in 90% of cases. Because ultrasound is the only available technique today to visualize this nerve, it seems to be a promising new method for block guidance instead of fluoroscopy.