956 results for finite difference methods


Relevance: 30.00%

Abstract:

Nowadays, environmental issues and climate change play fundamental roles in the design of urban spaces. Our cities are growing in size, often following only immediate needs without a long-term vision. Consequently, sustainable development has become not only an ethical but also a strategic need: we can no longer afford uncontrolled urban expansion. One serious effect of the industrialisation of the territory is the increase of urban air and surface temperatures compared with the outlying rural surroundings. This difference in temperature is what constitutes an urban heat island (UHI). The purpose of this study is to clarify the role of urban surfacing materials in the thermal dynamics of an urban space, resulting in useful indications and advice for mitigating UHI. With this aim, four coloured concrete bricks were tested, measuring their emissivity and building their heat-release curves using infrared thermography. Two emissivity evaluation procedures were carried out and subsequently compared. The samples' performances were assessed, and the influence of colour on the thermal behaviour was investigated. In addition, some external pavements were analysed. Albedo and emissivity parameters were evaluated in order to understand their thermal behaviour under different conditions. Surface temperatures were recorded in a one-day measurement campaign. The ENVI-met software was used to simulate how the tested materials would behave in two typical urban scenarios: an urban canyon and an urban heat basin. The improvements they could bring to the urban microclimate were investigated. Emissivities obtained for the bricks ranged between 0.92 and 0.97, suggesting a limited influence of colour on this parameter. Nonetheless, the white concrete brick showed the best thermal performance and the black one the worst; the red and yellow bricks showed nearly identical intermediate trends. De facto, colours affected the overall thermal behaviour.
The emissivity parameter was also measured in the outdoor work, yielding (as expected) high values for the asphalts. Albedo measurements, conducted with a sunshine pyranometer, proved the improving effect of the yellow paint in terms of solar reflection, and the adverse influence of haze on measurement accuracy. ENVI-met simulations demonstrated the effectiveness of some of the tested materials in improving thermal conditions. In particular, results showed good performances for white bricks and granite in the heat-basin scenario, and for painted concrete and macadam in the urban-canyon scenario. These materials can be considered valuable solutions for UHI mitigation.
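As a rough illustration of how albedo and emissivity jointly set a sunlit surface's temperature, a simple radiative-balance estimate can be sketched (this is not the study's measurement procedure; the irradiance and sky-temperature values below are assumed, and convection is ignored):

```python
# Steady-state radiative balance for a sunlit pavement (convection ignored):
#   (1 - albedo) * G = eps * sigma * (Ts^4 - Tsky^4)
# Solving for the surface temperature Ts shows why high-albedo, high-emissivity
# materials stay cooler. All input values below are illustrative assumptions.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
G = 800.0             # solar irradiance, W m^-2 (assumed)
T_SKY = 283.0         # effective sky temperature, K (assumed)

def surface_temperature(albedo, emissivity):
    t4 = (1.0 - albedo) * G / (emissivity * SIGMA) + T_SKY**4
    return t4 ** 0.25

t_black = surface_temperature(albedo=0.05, emissivity=0.95)  # dark asphalt-like
t_white = surface_temperature(albedo=0.60, emissivity=0.95)  # white concrete-like
```

With these assumed inputs, the low-albedo surface equilibrates tens of kelvin hotter than the high-albedo one, which is the qualitative effect the brick measurements quantify.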

Relevance: 30.00%

Abstract:

This thesis concerns the development and combination of different numerical methods, as well as their application to problems of strongly correlated electron systems. Such materials show many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, many insights have already been gained through the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest limitations is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) as well as its cluster extensions (CDMFT). DMFT is the predominant theory of strongly correlated systems for which conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC.
We investigate the Mott transition within DMFT and analyse the influence of systematic errors on this transition.

Another prominent topic is the neglect of non-local interactions in DMFT. To address this, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results differ strongly between the models: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrised by the non-interacting dispersion. The special structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
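The DMFT self-consistency cycle referred to above can be sketched schematically. The snippet below uses the Bethe-lattice self-consistency condition and a trivial placeholder impurity solver (zero self-energy) purely to expose the loop structure; a study of the kind described would plug a BSS-QMC solver into that step, and the values of t, mu, and beta are illustrative:

```python
import numpy as np

# Schematic DMFT self-consistency loop on the Bethe lattice. The impurity
# solver here is a trivial placeholder returning zero self-energy, purely to
# show the loop structure; a real study would use, e.g., a BSS-QMC solver.
t, mu, beta = 0.5, 0.0, 10.0
n_freq = 256
iw = 1j * np.pi * (2 * np.arange(n_freq) + 1) / beta   # fermionic Matsubara freqs

def impurity_solver(weiss_inv):
    # Placeholder: non-interacting impurity, i.e. zero self-energy.
    return np.zeros_like(weiss_inv)

G = 1.0 / iw                                  # initial guess for the local G
for _ in range(500):
    weiss_inv = iw + mu - t**2 * G            # Bethe-lattice self-consistency
    sigma = impurity_solver(weiss_inv)        # <- the expensive quantum problem
    G_new = 1.0 / (weiss_inv - sigma)         # impurity Green's function (Dyson)
    if np.max(np.abs(G_new - G)) < 1e-12:
        break
    G = 0.5 * G + 0.5 * G_new                 # linear mixing for stability
```

With the placeholder solver, the loop converges to the non-interacting semicircular-density-of-states result G = 1/(iw - t^2 G); the point of the sketch is only where the impurity solver sits in the cycle and why its cost dominates.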

Relevance: 30.00%

Abstract:

Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models is the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods have been proposed to solve this problem, such as mesh morphing or image registration algorithms. The objective of this study is to compare a mesh-based and an image-based statistical appearance model approach for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of the volume, size, and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that of the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
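Element quality of the kind evaluated here is typically quantified per element. The following sketch is illustrative only (the study's exact metrics are not stated in the abstract): it computes a tetrahedron's volume from the scalar triple product and a crude edge-length-based shape measure:

```python
import numpy as np

# Illustrative tetrahedral quality measures: volume from the scalar triple
# product, and the ratio of longest to shortest edge as a crude shape
# indicator (1.0 for a regular tetrahedron). Not the study's actual metrics.
def tet_volume(a, b, c, d):
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0

def edge_ratio(a, b, c, d):
    pts = [np.asarray(p, float) for p in (a, b, c, d)]
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i in range(4) for j in range(i + 1, 4)]
    return max(edges) / min(edges)

# Unit right-corner tetrahedron: volume 1/6, edge ratio sqrt(2)
v = tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
r = edge_ratio((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```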

Relevance: 30.00%

Abstract:

Sinotubular junction dilation is one of the most frequent pathologies associated with aortic root incompetence. Hence, we create a finite element model considering the whole root geometry; then, starting from healthy valve models and referring to measurements of pathological valves reported in the literature, we reproduce the pathology of the aortic root by imposing appropriate boundary conditions. After evaluating the virtual pathological process, we are able to correlate the dimensions of non-functional valves with those of competent valves. Such a relation could be helpful in recreating a competent aortic root and, in particular, could provide useful preoperative information for aortic valve-sparing surgery.

Relevance: 30.00%

Abstract:

Objectives: To compare the use of pair-wise meta-analysis methods with multiple treatment comparison (MTC) methods in evidence-based health-care evaluation, for estimating the effectiveness and cost-effectiveness of alternative health-care interventions based on the available evidence. Methods: Pair-wise meta-analysis and more complex evidence syntheses incorporating an MTC component are applied to three examples: 1) the clinical effectiveness of interventions for preventing strokes in people with atrial fibrillation; 2) the clinical and cost-effectiveness of using drug-eluting stents in percutaneous coronary intervention in patients with coronary artery disease; and 3) the clinical and cost-effectiveness of using neuraminidase inhibitors in the treatment of influenza. We compare the two synthesis approaches with respect to the assumptions made, the empirical estimates produced, and the conclusions drawn. Results: The difference between the point estimates of effectiveness produced by the pair-wise and MTC approaches was generally unpredictable, sometimes agreeing closely and in other instances differing considerably. In all three examples, the MTC approach allowed the inclusion of randomized controlled trial evidence ignored in the pair-wise meta-analysis approach. This generally increased the precision of the effectiveness estimates from the MTC model. Conclusions: The MTC approach to synthesis allows the evidence base on clinical effectiveness to be treated as a coherent whole, to include more data, and sometimes to relax the assumptions made in the pair-wise approaches. However, MTC models are necessarily more complex than those developed for pair-wise meta-analysis and thus could be seen as less transparent. It is therefore important that model details and the assumptions made are carefully reported alongside the results.
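A minimal numerical illustration of why pair-wise and MTC estimates can differ in precision is the Bucher-style adjusted indirect comparison; the effect sizes and variances below are invented for illustration and are not taken from the paper:

```python
# Fixed-effect inverse-variance pooling and a Bucher-style indirect comparison.
# All effect sizes and variances below are invented for illustration.
def pooled_fixed_effect(estimates, variances):
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return est, 1.0 / sum(weights)

# Direct head-to-head evidence (log odds ratios): A vs B and A vs C
d_ab, v_ab = -0.30, 0.02
d_ac, v_ac = -0.50, 0.03

# Indirect B vs C via the common comparator A: the point estimate is the
# difference, but the variance is the *sum*, so an indirect link is always
# less precise than either direct input -- one reason adding such links in
# an MTC changes precision relative to pair-wise meta-analysis.
d_bc = d_ac - d_ab
v_bc = v_ac + v_ab
```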

Relevance: 30.00%

Abstract:

Purpose: Acupuncture is one of the complementary medicine therapies with the greatest demand in Switzerland and many other countries in the West and in Asia. Over the past decades, the pool of scientific literature on acupuncture has markedly increased. The diagnostic methods upon which acupuncture treatment is based have only been addressed sporadically in scientific journals. The goal of this study is to assess the use of different diagnostic methods in acupuncture practices and to investigate similarities and differences in the use of these diagnostic methods between physician and non-physician acupuncturists. Methods: 44 physician acupuncturists with certificates of competence in acupuncture – Traditional Chinese Medicine (TCM) from the ASA (Assoziation Schweizer Ärztegesellschaften für Akupunktur und Chinesische Medizin: the Association of Swiss Medical Societies for Acupuncture and Chinese Medicine) and 33 non-physician acupuncturists listed in the EMR (Erfahrungsmedizinisches Register: a national register which assigns a quality label to therapists in complementary and alternative medicine) in the cantons of Basel-Stadt and Basel-Land were asked to fill out a questionnaire on diagnostic methods. The response rate was 46.8% (69.7% of non-physician acupuncturists and 29.5% of physician acupuncturists). Results: The results show that both physician and non-physician acupuncturists take patients' medical history (94%) and use pulse diagnosis (89%), tongue diagnosis (83%), and palpation of body and ear acupuncture points (81%) as diagnostic methods to guide their acupuncture treatments. Between the two groups, there were significant differences in the diagnostic tools used. Physician acupuncturists examine their patients with Western medical methods significantly more often than non-physician acupuncturists do (p<.05). Non-physician acupuncturists use pulse diagnosis more often than physicians (p<.05).
A highly significant difference was observed in the length of time spent collecting patients' medical history, on which non-physician acupuncturists clearly spent more time (p<.001). Conclusion: Depending on the educational background of the acupuncturist, different diagnostic methods are used for making the diagnosis. Especially the more time-consuming methods, such as a comprehensive anamnesis and pulse diagnosis, are more frequently employed by non-physician practitioners. Further studies will clarify whether these results are valid for Switzerland in general, and to what extent the differing use of diagnostic methods has an impact on the diagnosis itself and on the resulting treatment methods, as well as on treatment success and patients' satisfaction.

Relevance: 30.00%

Abstract:

The aim of this in vitro study was to assess the agreement among four techniques used as gold standards for the validation of methods for occlusal caries detection. Sixty-five human permanent molars were selected, and one site on each occlusal surface was chosen as the test site. The teeth were cut and prepared according to each technique: stereomicroscopy without coloring (1), dye enhancement with rhodamine B (2) and with fuchsine/acetic light green (3), and semi-quantitative microradiography (4). Digital photographs of each prepared tooth were assessed by three examiners for caries extension. Weighted kappa, as well as Friedman's test with multiple comparisons, was used to compare all techniques and verify statistically significant differences. Results: kappa values varied from 0.62 to 0.78, the latter being found for both dye enhancement methods. Friedman's test showed a statistically significant difference (P < 0.001), and multiple comparisons identified differences among all techniques except between the two dye enhancement methods (rhodamine B and fuchsine/acetic light green). Cross-tabulation showed that stereomicroscopy overscored the lesions, whereas both dye enhancement methods showed good agreement. Furthermore, the outcome of caries diagnostic tests may be influenced by the validation method applied. Dye enhancement methods seem to be reliable as gold standard methods.
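The weighted kappa statistic used above penalizes ordinal disagreements by their distance. A generic implementation can be sketched as follows (linear weights are an assumption here, since the abstract does not state the weighting scheme used):

```python
import numpy as np

# Linearly weighted kappa for two raters scoring the same items on an ordinal
# scale of n_categories levels. Generic implementation; the study's exact
# weighting scheme is not stated in the abstract, so linear weights are assumed.
def weighted_kappa(r1, r2, n_categories):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    k = n_categories
    observed = np.zeros((k, k))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    p1 = observed.sum(axis=1)              # rater-1 marginals
    p2 = observed.sum(axis=0)              # rater-2 marginals
    expected = np.outer(p1, p2)            # chance agreement under independence
    i, j = np.indices((k, k))
    disagreement = np.abs(i - j) / (k - 1)  # linear disagreement weights
    return 1.0 - (disagreement * observed).sum() / (disagreement * expected).sum()

perfect = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4)   # full agreement
partial = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 2], 4)   # one near-miss
```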

Relevance: 30.00%

Abstract:

Clay minerals have fundamental importance in many processes in soils and sediments, such as the bioavailability of nutrients, water retention, the adsorption of common pollutants, and the formation of an impermeable barrier upon swelling. Many of the properties of clay minerals are due to the unique environment present at the clay mineral/water interface. Traditional techniques such as X-ray diffraction (XRD) and adsorption isotherms have provided a wealth of information about this interface but have suffered from limitations. The methods and results presented herein are designed to yield new experimental information about the clay mineral/water interface. A new method of studying the swelling dynamics of clay minerals was developed using in situ atomic force microscopy (AFM). The preliminary results presented here demonstrate that this technique allows one to study individual clay mineral unit layers, explore the natural heterogeneities of samples, and monitor the swelling dynamics of clay minerals in real time. Cation exchange experiments were conducted by monitoring the swelling change of individual nontronite quasi-crystals as the chemical composition of the surrounding environment was manipulated several times. A proof-of-concept study showed that the changes in swelling arise from the exchange of interlayer cations and not from the mechanical force of replacing the solution in the fluid cell. A series of attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR) experiments was performed to gain a better understanding of the organization of water within the interlayer region of two Fe-bearing clay minerals. These experiments made use of the subtractive Kramers-Kronig (SKK) transform and the calculation of difference spectra to obtain information about interfacial water hidden within the absorption bands of bulk water.
The results indicate that the reduction of structural iron disrupts the organization of water around a strongly hydrated cation such as sodium as the cation transitions from an outer-sphere complex with the mineral surface to an inner-sphere complex. In the case of a less strongly hydrated cation such as potassium, reduction of structural iron actually increases the ordering of water molecules at the mineral surface. These effects were only observed upon reduction of iron in the tetrahedral sheet close to the basal surface, where the increased charge density is localized closer to the cations in the interlayer.

Relevance: 30.00%

Abstract:

The analysis of Komendant's design of the Kimbell Art Museum was carried out in order to determine the effectiveness of the ring beams, edge beams and prestressing in the shells of the roof system. Finite element analysis was not available to Komendant or other engineers of the time to aid them in the design and analysis. Thus, the use of this tool helped to form a new perspective on the Kimbell Art Museum and analyze the engineer's work. In order to carry out the finite element analysis of Kimbell Art Museum, ADINA finite element analysis software was utilized. Eight finite element models (FEM-1 through FEM-8) of increasing complexity were created. The results of the most realistic model, FEM-8, which included ring beams, edge beams and prestressing, were compared to Komendant's calculations. The maximum deflection at the crown of the mid-span surface of -0.1739 in. in FEM-8 was found to be larger than Komendant's deflection in the design documents before the loss in prestressing force (-0.152 in.) but smaller than his prediction after the loss in prestressing force (-0.3814 in.). Komendant predicted a larger longitudinal stress of -903 psi at the crown (vs. -797 psi in FEM-8) and 37 psi at the edge (vs. -347 psi in FEM-8). Considering the strength of concrete of 5000 psi, the difference in results is not significant. From the analysis it was determined that both FEM-5, which included prestressing and fixed rings, and FEM-8 can be successfully and effectively implemented in practice. Prestressing was used in both models and thus served as the main contribution to efficiency. FEM-5 showed that ring and edge beams can be avoided, however an architect might find them more aesthetically appropriate than rigid walls.
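To put the reported comparison in perspective, the relative differences between Komendant's calculations and FEM-8 can be checked directly from the figures quoted above (a simple arithmetic check using only numbers given in the abstract):

```python
# Relative differences between Komendant's design values and the FEM-8 results,
# using the crown deflection (before prestress loss) and the longitudinal
# crown stress quoted in the abstract.
def percent_diff(reference, value):
    return abs(value - reference) / abs(reference) * 100.0

deflection_diff = percent_diff(-0.152, -0.1739)   # in., crown, before prestress loss
stress_diff = percent_diff(-903.0, -797.0)        # psi, longitudinal stress at crown
```

Both differences come out near 12-14%, consistent with the abstract's judgement that, against a 5000 psi concrete strength, the discrepancy is not significant.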

Relevance: 30.00%

Abstract:

Digital signal processing (DSP) techniques for biological sequence analysis continue to grow in popularity due to the inherent digital nature of these sequences. DSP methods have demonstrated early success for detection of coding regions in a gene. Recently, these methods are being used to establish DNA gene similarity. We present the inter-coefficient difference (ICD) transformation, a novel extension of the discrete Fourier transformation, which can be applied to any DNA sequence. The ICD method is a mathematical, alignment-free DNA comparison method that generates a genetic signature for any DNA sequence that is used to generate relative measures of similarity among DNA sequences. We demonstrate our method on a set of insulin genes obtained from an evolutionarily wide range of species, and on a set of avian influenza viral sequences, which represents a set of highly similar sequences. We compare phylogenetic trees generated using our technique against trees generated using traditional alignment techniques for similarity and demonstrate that the ICD method produces a highly accurate tree without requiring an alignment prior to establishing sequence similarity.
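The general DSP workflow behind such methods can be sketched generically; note that the actual ICD transform is not specified in the abstract, so the sketch below uses the common Voss binary-indicator mapping and compares plain DFT magnitude spectra instead:

```python
import numpy as np

# Generic DFT-based, alignment-free sequence comparison. The actual ICD
# transform is not specified here; this sketch uses the Voss binary-indicator
# mapping per base and a Euclidean distance between magnitude spectra.
def magnitude_spectrum(seq, length=64):
    spectra = []
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        indicator = np.resize(indicator, length)    # crude length normalization
        spectra.append(np.abs(np.fft.fft(indicator)))
    return np.concatenate(spectra)

def spectral_distance(seq_a, seq_b):
    return float(np.linalg.norm(magnitude_spectrum(seq_a) - magnitude_spectrum(seq_b)))

same = spectral_distance("ACGTACGTAC", "ACGTACGTAC")   # identical sequences
diff = spectral_distance("ACGTACGTAC", "AAAAAAAAAA")   # dissimilar sequences
```

Such pairwise distances can then feed a standard tree-building algorithm (e.g. neighbour joining) without any prior alignment, which is the structural idea the abstract describes.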

Relevance: 30.00%

Abstract:

Due to the inherent limitations of DXA, assessment of the biomechanical properties of vertebral bodies relies increasingly on CT-based finite element (FE) models, but these often use simplistic material behaviour and/or single loading cases. In this study, we applied a novel constitutive law for bone elasticity, plasticity and damage to FE models created from coarsened pQCT images of human vertebrae, and compared vertebral stiffness, strength and damage accumulation for axial compression, anterior flexion and a combination of these two cases. FE axial stiffness and strength correlated with experiments and were linearly related to flexion properties. In all loading modes, damage localised preferentially in the trabecular compartment. Damage for the combined loading was higher than cumulated damage produced by individual compression and flexion. In conclusion, this FE method predicts stiffness and strength of vertebral bodies from CT images with clinical resolution and provides insight into damage accumulation in various loading modes.
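The flavour of an elasticity-plasticity-damage constitutive law of the kind applied above can be conveyed with a one-dimensional sketch. This is purely illustrative: the paper's actual law is multiaxial and its parameters are not given in the abstract, so all moduli and rates below are invented:

```python
import math

# 1D strain-driven update for linear elasticity with isotropic hardening
# plasticity and exponential scalar damage. All parameters are illustrative.
E = 10e3          # elastic modulus (MPa, assumed)
SIGMA_Y = 100.0   # initial yield stress (MPa, assumed)
H = 500.0         # hardening modulus (MPa, assumed)
DMG_RATE = 2.0    # damage evolution rate (assumed)

def stress_at(strain):
    eps_p, kappa = 0.0, 0.0                    # plastic strain, hardening variable
    trial = E * (strain - eps_p)               # elastic trial stress
    overstress = abs(trial) - (SIGMA_Y + H * kappa)
    if overstress > 0.0:                       # plastic correction (return mapping)
        dgamma = overstress / (E + H)
        eps_p += dgamma * math.copysign(1.0, trial)
        kappa += dgamma
    damage = 1.0 - math.exp(-DMG_RATE * kappa)  # damage grows with plastic flow
    return (1.0 - damage) * E * (strain - eps_p)

elastic = stress_at(0.005)   # below yield: purely elastic response
damaged = stress_at(0.020)   # beyond yield: softened by plasticity and damage
```

In the FE models, an update of this general kind (in its multiaxial form) runs at every integration point, which is how damage can be tracked and localised per compartment and loading mode.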

Relevance: 30.00%

Abstract:

STUDY DESIGN: The biomechanics of vertebral bodies augmented with real distributions of cement were investigated using nonlinear finite element (FE) analysis. OBJECTIVES: To compare the stiffness, strength, and stress transfer of augmented versus nonaugmented osteoporotic vertebral bodies under compressive loading and, specifically, to examine how cement distribution, volume, and compliance affect these biomechanical variables. SUMMARY OF BACKGROUND DATA: Previous FE studies suggested that vertebroplasty might alter vertebral stress transfer, leading to adjacent vertebral failure. However, no FE study so far has accounted for real cement distributions and bone damage accumulation. METHODS: Twelve vertebral bodies scanned with high-resolution pQCT and tested in compression were augmented with various volumes of cement and scanned again. The nonaugmented and augmented pQCT datasets were converted to FE models, with bone properties modeled with an elastic-plastic-damage constitutive law that had previously been calibrated for the nonaugmented models. The cement-bone composite was modeled with a rule of mixtures. The nonaugmented and augmented FE models were subjected to compression, and their stiffness, strength, and stress maps were calculated for different cement compliances. RESULTS: Cement distribution dominated the stiffening and strengthening effects of augmentation. Models with cement connecting either the superior or the inferior endplate (S/I fillings) were only up to 2 times stiffer than the nonaugmented models, with minimal strengthening, whereas those with cement connecting both endplates (S + I fillings) were 1 to 8 times stiffer and 1 to 12 times stronger. Stresses increased above and below the cement; this increase was higher for the S + I cases and was significantly reduced by increasing cement compliance.
CONCLUSION: The developed FE approach, which accounts for real cement distributions and bone damage accumulation, provides a refined insight into the mechanics of augmented vertebral bodies. In particular, augmentation with compliant cement bridging both endplates would reduce stress transfer while providing sufficient strengthening.
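The rule of mixtures mentioned in the Methods blends bone and cement properties by local volume fraction; a minimal sketch of the Voigt (iso-strain) form follows, with invented moduli, since the paper's exact homogenization and material values are not given in the abstract:

```python
# Voigt (iso-strain) rule of mixtures for the modulus of a cement-bone
# composite voxel. Moduli below are illustrative, not the paper's values.
E_BONE = 500.0     # MPa, trabecular bone modulus (assumed)
E_CEMENT = 3000.0  # MPa, PMMA-like cement modulus (assumed)

def composite_modulus(cement_fraction):
    # cement_fraction: local cement volume fraction in [0, 1]
    return cement_fraction * E_CEMENT + (1.0 - cement_fraction) * E_BONE

half = composite_modulus(0.5)   # equal parts bone and cement
```

Lowering E_CEMENT in such a mixture is one simple way the effect of "cement compliance" on local stiffness can be explored, which mirrors the compliance sweep described in the Results.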

Relevance: 30.00%

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often differ considerably from experimental data. Thus, even nowadays, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years earlier; that work had only limited success because of the computational power and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel with different screw geometry configurations. These melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain. The simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.

Relevance: 30.00%

Abstract:

An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder. The extruder melts, mixes, and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Owing to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult; for such dies, iterative methods must be used, and an automated iterative method is desired. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part. This file is then used in a commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for the automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used.
To deal with the noise in the objective function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs which exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
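The two penalty strategies described above can be sketched as follows; this is a generic illustration with an invented pressure limit and steepness constant, not the thesis's actual code:

```python
import math

# Exponential pressure penalty for a die-optimization objective. P_LIMIT and
# K are assumed values; K controls how sharply the penalty grows past the limit.
P_LIMIT = 30.0   # MPa, extruder pressure capability (assumed)
K = 0.5          # penalty steepness (assumed)

def penalty_one_sided(pressure):
    # Variant 1: penalize only designs that exceed the pressure limit.
    return math.exp(K * (pressure - P_LIMIT)) if pressure > P_LIMIT else 0.0

def penalty_two_sided(pressure):
    # Variant 2: apply the same exponential term to all designs; it is small
    # but nonzero below the limit, giving the optimizer a smooth slope away
    # from the constraint everywhere.
    return math.exp(K * (pressure - P_LIMIT))

def objective(velocity_variation, pressure, penalty):
    # Minimize outlet-velocity nonuniformity plus the chosen pressure penalty.
    return velocity_variation + penalty(pressure)
```

The one-sided variant leaves the feasible region untouched but introduces a kink at the limit; the two-sided variant is smooth everywhere, which can matter for derivative-based methods such as the modified trust region method used here.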