939 results for Two-dimensional cutting problem
Abstract:
This paper reports an experimental method to estimate the convective heat transfer of cutting fluids in a laminar flow regime applied to a thin steel plate. The heat source provided by metal cutting was simulated by electrical heating of the plate. Three different cooling conditions were evaluated: a dry cooling system, a flooded cooling system and a minimum quantity of lubrication (MQL) cooling system, with two different cutting fluids for the last two systems. The results showed considerable enhancement of convective heat transfer using the flooded system. For the dry and MQL systems, heat conduction inside the body was much faster than heat convection away from its surface. In addition, the Biot number was used to analyze which heat conduction models apply to each experimental condition tested.
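The Biot-number criterion mentioned in the abstract can be sketched numerically. The coefficient and plate values below are hypothetical, chosen only to illustrate the common Bi < 0.1 rule of thumb for lumped-capacitance (uniform-temperature) conduction models; they are not the paper's measured values.

```python
def biot_number(h, k, L_c):
    """Bi = h * L_c / k: ratio of surface convection to internal conduction."""
    return h * L_c / k

def lumped_model_valid(h, k, L_c, threshold=0.1):
    """Rule of thumb: a lumped (uniform-temperature) conduction model is
    acceptable when conduction inside the body is much faster than surface
    convection, i.e. Bi < 0.1."""
    return biot_number(h, k, L_c) < threshold

# Hypothetical values for a thin steel plate:
h = 1200.0    # convective heat transfer coefficient, W/(m^2 K) -- assumed
k = 50.0      # thermal conductivity of steel, W/(m K) -- typical
L_c = 0.003   # characteristic length (plate half-thickness), m -- assumed
print(biot_number(h, k, L_c), lumped_model_valid(h, k, L_c))
```

With a much larger convective coefficient (e.g. flood cooling), Bi exceeds the threshold and a spatially resolved conduction model is needed instead.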
Abstract:
We present two-dimensional (2D) two-particle angular correlations measured with the STAR detector on relative pseudorapidity η and azimuth φ for charged particles from Au-Au collisions at √s_NN = 62 and 200 GeV with transverse momentum p_t ≥ 0.15 GeV/c, |η| ≤ 1, and 2π in azimuth. Observed correlations include a same-side (relative azimuth < π/2) 2D peak, a closely related away-side azimuth dipole, and an azimuth quadrupole conventionally associated with elliptic flow. The same-side 2D peak and away-side dipole are explained by semihard parton scattering and fragmentation (minijets) in proton-proton and peripheral nucleus-nucleus collisions. Those structures follow N-N binary-collision scaling in Au-Au collisions until midcentrality, where a transition to a qualitatively different centrality trend occurs within one 10% centrality bin. Above the transition point the number of same-side and away-side correlated pairs increases rapidly relative to binary-collision scaling, the η width of the same-side 2D peak also increases rapidly (η elongation), and the φ width actually decreases significantly. Those centrality trends are in marked contrast with conventional expectations for jet quenching in a dense medium. The observed centrality trends are compared to perturbative QCD predictions computed in HIJING, which serve as a theoretical baseline, and to the expected trends for semihard parton scattering and fragmentation in a thermalized opaque medium predicted by theoretical calculations and phenomenological models. We are unable to reconcile a semihard parton scattering and fragmentation origin for the observed correlation structure and centrality trends with heavy-ion collision scenarios that invoke rapid parton thermalization. If the collision system turns out to be effectively opaque to few-GeV partons, the present observations would be inconsistent with the minijet picture discussed here. DOI: 10.1103/PhysRevC.86.064902
Abstract:
Piezoelectric materials can be used to convert oscillatory mechanical energy into electrical energy. Energy harvesting devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The design of energy harvesting devices is not obvious and requires optimization procedures. This paper investigates the influence of pattern gradation using topology optimization on the design of piezocomposite energy harvesting devices based on bending behavior. The objective function consists of maximizing the electric power generated in a load resistor. A projection scheme is employed to compute the element densities from the design variables and control the length scale of the material density. Examples of two-dimensional piezocomposite energy harvesting devices are presented and discussed using the proposed method. The numerical results illustrate that pattern gradation constraints help to increase the electric power generated in a load resistor and guide the problem toward a more stable solution. (C) 2012 Elsevier Ltd. All rights reserved.
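The abstract does not specify which projection scheme is used; a common choice in the topology optimization literature is the smoothed Heaviside (tanh) projection, sketched below as an illustrative assumption rather than the authors' exact formulation. It maps filtered design variables to element densities and pushes intermediate values toward 0 or 1 as the sharpness parameter β grows.

```python
import math

def heaviside_projection(rho, beta=8.0, eta=0.5):
    """Smoothed Heaviside projection: element density from a design variable
    rho in [0, 1]; larger beta sharpens the 0/1 separation around threshold eta."""
    num = math.tanh(beta * eta) + math.tanh(beta * (rho - eta))
    den = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return num / den

# Intermediate densities are sharpened; 0, 0.5 and 1 map to themselves
# (for eta = 0.5), which keeps solid and void regions stable.
densities = [0.0, 0.3, 0.5, 0.7, 1.0]
projected = [heaviside_projection(r) for r in densities]
```

In practice β is increased gradually during the optimization (a continuation scheme) so that the problem stays well-behaved while converging toward a crisp material layout.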
Abstract:
The flow around a smooth fixed circular cylinder over a large range of Reynolds numbers is considered in this paper. In order to investigate this canonical case, we perform CFD calculations and apply verification & validation (V&V) procedures to draw conclusions regarding numerical error and, afterwards, assess the modeling errors and the capabilities of this (U)RANS method to solve the problem. Eight Reynolds numbers between Re = 10 and Re = 5 × 10^5 are presented with at least four geometrically similar grids and five discretizations in time for each case (when unsteady), together with strict control of iterative and round-off errors, allowing a consistent verification analysis with uncertainty estimation. Two-dimensional RANS calculations, steady or unsteady, laminar or turbulent, are performed. The original 1994 k-ω SST turbulence model by Menter is used to model turbulence. The validation procedure is performed by comparing the numerical results with an extensive set of experimental results compiled from the literature. [DOI: 10.1115/1.4007571]
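The grid-convergence part of such a V&V study can be illustrated with the standard observed-order and Richardson-extrapolation formulas for three solutions on geometrically similar grids with a constant refinement ratio. The numbers below are synthetic, chosen to mimic second-order convergence; they are not results from the paper.

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy p from three solutions on grids with
    constant refinement ratio r (f1 = finest grid, f3 = coarsest)."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson_extrapolate(f1, f2, r, p):
    """Estimate of the grid-converged value from the two finest solutions."""
    return f1 + (f1 - f2) / (r**p - 1.0)

# Synthetic example: a quantity converging at second order toward 1.0,
# with discretization errors of 0.01, 0.04, 0.16 on successively coarser grids.
r = 2.0
f1, f2, f3 = 1.01, 1.04, 1.16
p = observed_order(f1, f2, f3, r)           # close to 2.0
estimate = richardson_extrapolate(f1, f2, r, p)
```

The difference between the finest-grid solution and the extrapolated value is the basis for the numerical uncertainty estimate used in verification studies.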
Abstract:
The main feature of partition of unity methods such as the generalized or extended finite element method is their ability to utilize a priori knowledge about the solution of a problem in the form of enrichment functions. However, analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. This procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity method framework. This approach can produce accurate nonlinear solutions at a reduced computational cost compared to standard finite element methods, since computationally intensive nonlinear iterations can be performed on coarse global meshes after the creation of enrichment functions that properly describe the localized nonlinear behavior. Several three-dimensional nonlinear problems based on rate-independent J2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.
Abstract:
This paper deals with the numerical analysis of saturated porous media, taking into account damage phenomena in the solid skeleton. The porous medium is treated within a poro-elastic framework, under fully saturated conditions, based on Biot's theory. A scalar damage model is assumed for this analysis. An implicit boundary element method (BEM) formulation, based on time-independent fundamental solutions, is developed and implemented to couple the fluid flow and two-dimensional elastostatic problems. The integration over boundary elements is evaluated using a numerical Gauss procedure. A semi-analytical scheme for the case of triangular domain cells is followed to carry out the relevant domain integrals. The non-linear problem is solved by a Newton-Raphson procedure. Numerical examples are presented in order to validate the implemented formulation and to illustrate its efficacy. (C) 2011 Elsevier Ltd. All rights reserved.
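The Newton-Raphson procedure used for such non-linear solves can be illustrated in its scalar form. The residual below is a generic algebraic example, not the coupled BEM damage model; it only shows the iteration x_{k+1} = x_k - R(x_k)/R'(x_k) that the method repeats until the residual vanishes.

```python
def newton_raphson(residual, d_residual, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson iteration: repeatedly correct x by
    -R(x)/R'(x) until |R(x)| < tol or max_iter is reached."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= r / d_residual(x)
    return x

# Generic example: solve x^3 - 2 = 0 starting from x = 1.
root = newton_raphson(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```

In the vector-valued setting of a BEM or FEM formulation, the derivative becomes the tangent (Jacobian) matrix and each iteration solves a linear system instead of dividing by R'(x).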
Abstract:
We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. (C) 2012 Elsevier B.V. All rights reserved.
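The two-gene map itself is not given in the abstract, so as a stand-in the sketch below computes the largest Lyapunov exponent of the well-known Hénon map, using the standard tangent-vector method that underlies Lyapunov diagrams of two-dimensional maps: iterate a tangent vector with the Jacobian along the orbit, renormalizing at every step, and average the log of the stretching factor.

```python
import math

def henon(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map (classical chaotic parameters)."""
    return 1.0 - a * x * x + y, b * x

def henon_jacobian(x, y, a=1.4, b=0.3):
    """Jacobian of the map at (x, y): [[-2*a*x, 1], [b, 0]]."""
    return ((-2.0 * a * x, 1.0), (b, 0.0))

def largest_lyapunov(n_iter=20000, n_transient=1000):
    """Largest Lyapunov exponent: average log-growth of a tangent vector
    carried along the orbit and renormalized at every step."""
    x, y = 0.1, 0.1
    for _ in range(n_transient):          # discard the transient
        x, y = henon(x, y)
    vx, vy = 1.0, 0.0                     # initial tangent vector
    total = 0.0
    for _ in range(n_iter):
        (j11, j12), (j21, j22) = henon_jacobian(x, y)
        vx, vy = j11 * vx + j12 * vy, j21 * vx + j22 * vy
        norm = math.hypot(vx, vy)
        total += math.log(norm)
        vx, vy = vx / norm, vy / norm     # renormalize to avoid overflow
        x, y = henon(x, y)
    return total / n_iter
```

Scanning this exponent over a grid of map parameters, and marking where it is negative (periodic) versus positive (chaotic), is how the periodic windows, tongues and shrimps described above are revealed.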
Abstract:
The importance of the mechanical aspects of cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its absence may be related to the formation of cancer, which exhibits higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are really essential and which have only a marginal effect. A systematic method to reconstruct the elements involved in the cell's perception of mechanical cues could substantially accelerate the validation of these models. This work proposes a routine capable of reconstructing the topology of the focal adhesions and of the actomyosin portion of the cytoskeleton from the displacement field generated by the cell on a flexible substrate. Another way to frame this problem is as the reconstruction of the forces applied by the cell from measurements of the substrate displacement, which characterizes it as an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is suitable for finding a solution. TOM consists of the iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to obtain the substrate displacement experimentally is through Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Besides systematically generating the distributions of focal adhesions and actomyosin for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented by a two-dimensional linear Finite Element Method model.
Actin contraction is modeled as an initial stress in the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data obtained from experiments on cytoskeletal prestress and micropatterning, comparing the numerical results to the experimental ones.
Abstract:
In this paper we present a method for the regularization of 3D cylindrical surfaces. By a cylindrical surface we mean a 3D surface that can be expressed as a mapping S(l, µ) → R³, where (l, µ) represents a cylindrical parametrization of the 3D surface. We build an initial cylindrical parametrization of the surface and propose a new method to regularize it. This method takes into account the information supplied by the disparity maps computed between pairs of images to constrain the regularization of the set of 3D points. We propose a model based on an energy composed of two terms: an attachment term that minimizes the difference between the image coordinates and the disparity maps, and a second term that enables regularization by means of anisotropic diffusion. One interesting advantage of this approach is that we regularize the 3D surface by solving a two-dimensional minimization problem.
Abstract:
Slope failure occurs in many areas throughout the world and becomes an important problem when it interferes with human activity, causing disasters with loss of life and property damage. In this research we investigate slope failure through centrifuge modeling, in which a reduced-scale model, N times smaller than the full-scale prototype, is tested while the acceleration is increased N times (relative to gravity) to preserve the stress and strain behavior. The aims of this research, "Centrifuge modeling of sandy slopes", are, in brief: 1) to test the reliability of centrifuge modeling as a tool to investigate the behavior of sandy slope failure; 2) to understand how the failure mechanism is affected by changing the slope angle, and to obtain useful information for design. To achieve this goal the work is arranged as follows. Chapter one: centrifuge modeling of slope failure. This chapter provides a general view of the context of the work: what a slope failure is, how it happens, and which tools are available to investigate this phenomenon. It then introduces the technology used to study the topic, the geotechnical centrifuge. Chapter two: testing apparatus. The first section describes the procedures and facilities used to perform a test in the centrifuge. It then presents the characteristics of the soil (Nevada sand), such as dry unit weight, water content and relative density, and its strength parameters (c, φ), determined in the laboratory through triaxial tests. Chapter three: centrifuge tests. This part presents the results of the centrifuge tests, namely the acceleration at failure and the failure surface for each model tested. In our case study the models had the same soil and geometric characteristics but different slope angles.
The angles tested in this research were 60°, 75° and 90°. Chapter four: slope stability analysis. We introduce the features and concepts of the software ReSSA (2.0), which allows us to calculate the theoretical failure surfaces of the prototypes. This section then compares the experimental failure surfaces of the prototypes, traced in the laboratory, with those calculated by the software. Chapter five: conclusion. The conclusion presents the results obtained in relation to the two main aims mentioned above.
Abstract:
This work presents a novel approach to solving a two-dimensional problem with an adaptive finite element method. The most common strategy for nested adaptivity is to generate a mesh that correctly represents the geometry and the input parameters, and to refine this mesh locally to obtain the most accurate solution. In contrast, the authors propose a technique using independent meshes for the geometry, the input data and the unknowns. Each mesh is obtained by local nested refinement of the same coarse mesh in the parametric space…
Abstract:
Medical research has shown that early detection of breast cancer decisively improves the chances of recovery. The present work is concerned with the attempt to achieve this by means of electrical impedance tomography. To this end, electrodes are attached to the breast, through which current flows into the breast. For various current configurations, the resulting potential distributions are measured, which allows conclusions to be drawn about the conductivity distribution. Tumors can thus be localized on account of their electrical properties, which differ from those of normal tissue. This dissertation describes various approaches conceivable in practice. The problem is investigated in two and in three dimensions with one or several current configurations. In the two-dimensional case, cross-sectional images of objects in a test tank are computed from realistic measurement data. All investigations in three dimensions refer to artificially generated data. A further focus of the work lies in the determination of the current flow through electrodes in three dimensions. Knowledge of these current densities is of great importance for a precise treatment of the forward problem.
Abstract:
This thesis is focused on the development of heteronuclear correlation methods in solid-state NMR spectroscopy, where the spatial dependence of the dipolar coupling is exploited to obtain structural and dynamical information in solids. Quantitative results on dipolar coupling constants are extracted by means of spinning sideband analysis in the indirect dimension of the two-dimensional experiments. The principles of sideband analysis were established and are currently widely used in the group of Prof. Spiess for the special case of homonuclear 1H double-quantum spectroscopy. The generalization of these principles to the heteronuclear case is presented, with special emphasis on naturally abundant 13C-1H systems. For proton spectroscopy in the solid state, line-narrowing is of particular importance, and is here achieved by very fast sample rotation at the magic angle (MAS), with frequencies up to 35 kHz. Thereby, the heteronuclear dipolar couplings are also suppressed and have to be recoupled in order to achieve an efficient excitation of the observed multiple-quantum modes. Heteronuclear recoupling is most straightforwardly accomplished by performing the known REDOR experiment, where π-pulses are applied every half rotor period. This experiment was modified by the insertion of an additional spectroscopic dimension, such that heteronuclear multiple-quantum experiments can be carried out, which, as shown experimentally and theoretically, closely resemble homonuclear double-quantum experiments. Variants are presented which are well-suited for the recording of high-resolution 13C-1H shift correlation and spinning-sideband spectra, by means of which spatial proximities and quantitative dipolar coupling constants, respectively, of heteronuclear spin pairs can be determined. Spectral editing of 13C spectra is shown to be feasible with these techniques. Moreover, order phenomena and dynamics in columnar mesophases with 13C in natural abundance were investigated.
Two further modifications of the REDOR concept allow the correlation of 13C with quadrupolar nuclei, such as 2H. The spectroscopic handling of these nuclei is challenging in that they cover large frequency ranges, and with the new experiments it is shown how the excitation problem can be tackled or circumvented altogether, respectively. As an example, one of the techniques is used for the identification of a yet unknown motional process of the H-bonded protons in the crystalline parts of poly(vinyl alcohol).
Abstract:
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT (electrical impedance tomography) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
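A linearisation-based reconstruction of this kind reduces, at each step, to a regularized least-squares solve: given a sensitivity matrix A and boundary data b, one solves the Tikhonov normal equations (AᵀA + αI)x = Aᵀb for the conductivity update x. The sketch below is generic; the matrix and data are made up for illustration and are not EIT measurements.

```python
def tikhonov_solve(A, b, alpha):
    """Solve the regularized normal equations (A^T A + alpha I) x = A^T b,
    the core linear-algebra step of a linearized reconstruction."""
    m, n = len(A), len(A[0])
    # Build the n x n system matrix and right-hand side.
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy overdetermined system; with a tiny alpha the least-squares fit is recovered.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = tikhonov_solve(A, b, 1e-8)
```

The regularization parameter α trades data fidelity against stability: small α fits the (noisy) data closely, while large α shrinks the solution toward zero, which is what makes the ill-posed conductivity problem tractable.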
Abstract:
Biological membranes are lipid bilayers that behave like two-dimensional fluids. The energy of such a fluid surface can often be described by a Hamiltonian that is invariant under reparametrizations of the surface and depends only on its geometry. Contributions from internal degrees of freedom and from the environment can be incorporated into the formalism. In the present work this approach is used to study the mechanics of fluid membranes and similar surfaces. Stresses and torques in the surface can be expressed through covariant tensors. These can then be used, for example, to determine the equilibrium position of the contact line at which two adhering surfaces separate from each other. With the exception of capillary phenomena, the surface energy depends not only on translations of the contact line but also on changes in slope or even curvature. The resulting boundary conditions correspond to the equilibrium conditions on forces and torques if the contact line can move freely. If one of the surfaces is rigid, the variation must locally follow that surface; stresses and torques then contribute to a single equilibrium condition, and their contributions can no longer be identified individually. To make quantitative statements about the behavior of a fluid surface, its elastic properties must be known. The "nanodrum" experimental setup makes it possible to probe membrane properties locally: it consists of a pore-spanning membrane that is pushed into the pore by the tip of an atomic force microscope during the experiment. The linear behavior of the resulting force-distance curves can be reproduced with the theory developed in this work if the influence of adhesion between tip and membrane is neglected.
If this effect is included in the calculations, the result changes considerably: force-distance curves are no longer linear, and hysteresis and nonvanishing detachment forces appear. The predictions of the calculations could be used in future experiments to determine parameters such as the bending rigidity of the membrane with nanometer resolution. Once the material properties are known, problems of membrane mechanics can be examined more closely. Surface-mediated interactions are an interesting example in this context. With the help of the stress tensor mentioned above, analytical expressions can be derived for the curvature-mediated force between two particles that represent, for example, proteins. In addition, the balance of forces and torques is used to derive several conditions on the geometry of the membrane. For the case of two infinitely long cylinders on the membrane, these conditions are combined with profile calculations to make quantitative statements about the interaction. Theory and experiment reach their limits when it comes to correctly assessing the relevance of curvature-mediated interactions in the biological cell. In such a case, computer simulations offer an alternative approach: the simulations presented here predict that proteins can aggregate and form membrane vesicles as soon as each protein induces a minimum curvature in the membrane. The radius of the vesicles depends strongly on the locally imposed curvature. The result of the simulations is qualitatively confirmed in this work by an approximate theoretical model.