970 results for Computational methods


Relevance: 60.00%

Abstract:

This paper proposes an automated strategy for three-dimensional (3D) lumbar intervertebral disc (IVD) segmentation from MRI data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical-model-based approach. A 3D variable-radius soft tube model of the lumbar spine column is then built to guide the 3D disc segmentation. The disc segmentation is achieved as a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets demonstrated the robustness and accuracy of the proposed algorithm.
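To make the soft tube idea concrete, here is a minimal Python sketch (illustrative only; the function name, sigmoid fall-off and all values are our assumptions, not the authors' implementation) that assigns each voxel a soft membership in a variable-radius tube around a centerline:

import numpy as np

def soft_tube_membership(voxels, centerline, radii, softness=2.0):
    # Soft membership of voxels in a variable-radius tube: ~1 deep
    # inside the local radius, ~0 well outside, smooth in between.
    # voxels: (N, 3), centerline: (M, 3), radii: (M,) -- all in mm.
    d = np.linalg.norm(voxels[:, None, :] - centerline[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)                  # closest centerline sample
    excess = d[np.arange(len(voxels)), nearest] - radii[nearest]
    return 1.0 / (1.0 + np.exp(excess / softness))  # sigmoid fall-off

# Toy usage: straight 100 mm "spine column" of radius 15 mm along z.
z = np.linspace(0.0, 100.0, 50)
centerline = np.column_stack([np.zeros(50), np.zeros(50), z])
radii = np.full(50, 15.0)
print(soft_tube_membership(np.array([[0.0, 0.0, 50.0], [30.0, 0.0, 50.0]]),
                           centerline, radii))      # ~[1.0, 0.0]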

Relevance: 60.00%

Abstract:

In this paper, we review the hierarchical structure and the resulting elastic properties of mineralized tendons as obtained by various multiscale experimental and computational methods spanning the nano- to macroscale. The mechanical properties of mineralized collagen fibres are important for understanding the mechanics of hard tissues built from complex arrangements of these fibres, as in human lamellar bone. The uniaxial mineralized collagen fibre array naturally occurring in avian tendons is a well-studied model tissue for investigating the stages of tissue mineralization and the corresponding elastic properties. Some avian tendons mineralize with maturation, which results in a graded structure containing two zones of distinct morphology, circumferential and interstitial. These zones exhibit different amounts of mineral, collagen and pores, and a different distribution of mineral between the collagen fibrillar and extrafibrillar spaces, which together lead to distinct elastic properties. Mineralized tendon cells have two phenotypes: elongated tenocytes located between fibres in the circumferential zone, and cuboidal cells with lower aspect ratios in the interstitial zone. Interestingly, some regions of avian tendons seem to be predestined for mineralization, which manifests as specific collagen cross-linking patterns, a characteristic distribution of minor tendon constituents (such as proteoglycans) and a loss of collagen crimp. Results of investigations in naturally mineralizing avian tendons may be useful for understanding the pathological mineralization occurring in some human tendons.

Relevance: 60.00%

Abstract:

In any physicochemical process in liquids, the dynamical response of the solvent to solutes out of equilibrium plays a crucial role in the rates and products: the solvent molecules react to the changes in volume and electron density of the solutes to minimize the free energy of the solution, thus modulating the activation barriers and stabilizing (or destabilizing) intermediate states. In charge transfer (CT) processes in polar solvents, the response of the solvent always assists the formation of charge-separated states by stabilizing the energy of the localized charges. A deep understanding of solvation mechanisms and time scales is therefore essential for a correct description of any photochemical process in the dense phase and for designing molecular devices based on photosensitizers with CT excited states. In the last two decades, with the advent of ultrafast time-resolved spectroscopies, microscopic models describing the relevant case of polar solvation (where both the solvent and the solute molecules have a permanent electric dipole and the mutual interaction is mainly dipole-dipole) have progressed dramatically. Regardless of the details of each model, they all assume that the effect of the electrostatic fields of the solvent molecules on the internal electronic dynamics of the solute is perturbative, and that the solvent-solute coupling is mainly an electrostatic interaction between the constant permanent dipoles of the solute and the solvent molecules. This well-established picture has proven able to quantitatively rationalize spectroscopic signatures of environmental and electric dynamics (time-resolved Stokes shifts, inhomogeneous broadening, etc.). However, recent computational and experimental studies, including ours, have shown that further improvement is required. Indeed, in recent years we investigated several molecular complexes exhibiting photoexcited CT states, and we found that the current description of the formation and stabilization of CT states in an important group of molecules, such as transition metal complexes, is inaccurate. In particular, we proved that the solvent molecules are not just spectators of intramolecular electron density redistribution but significantly modulate it. Our results call for further development of quantum mechanical computational methods that treat the solute and (at least) the closest solvent molecules with a nonperturbative treatment of the effects of local electrostatics and direct solvent-solute interactions, in order to describe the dynamical changes of the solute excited states during the solvent response.
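For reference, the time-resolved Stokes-shift measurements mentioned above are conventionally summarized by the normalized solvation response function (a standard definition in the field, not specific to this work):

\[ S(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)}, \]

where \(\nu(t)\) is the peak emission frequency at time t; S(t) decays from 1 to 0 as the solvent relaxes around the photoexcited solute, and its time constants characterize the solvation dynamics.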

Relevance: 60.00%

Abstract:

Introduction
Gene expression is an important process whereby the genotype controls an individual cell's phenotype. However, even genetically identical cells display a variety of phenotypes, which may be attributed to differences in their environment. Yet, even after controlling for these two factors, individual phenotypes still diverge due to noisy gene expression. Synthetic gene expression systems allow investigators to isolate, control, and measure the effects of noise on cell phenotypes. I used mathematical and computational methods to design, study, and predict the behavior of noise-affected synthetic gene expression systems in S. cerevisiae.

Methods
I created probabilistic biochemical reaction models from known behaviors of the tetR and rtTA genes, their gene products, and their gene architectures. I then simplified these models to capture the essential behaviors of gene expression systems. Finally, I used these models to predict the behaviors of modified gene expression systems, which were experimentally verified.

Results
Cell growth, which is often ignored when formulating chemical kinetics models, was essential for understanding gene expression behavior. Models incorporating growth effects were used to explain unexpected reductions in gene expression noise, to design a set of gene expression systems with "linear" dose-responses, and to quantify the speed with which cells explored their fitness landscapes due to noisy gene expression.

Conclusions
Models incorporating noisy gene expression and cell division were necessary to design, understand, and predict the behaviors of synthetic gene expression systems. The methods and models developed here will allow investigators to design new gene expression systems more efficiently and to infer gene expression properties of TetR-based systems.
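For a flavor of the probabilistic reaction models involved, the Python sketch below simulates a minimal birth-death gene expression model with the exact Gillespie algorithm (an illustrative toy, not the dissertation's TetR/rtTA models; rate names and values are hypothetical):

import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k_syn=10.0, k_deg=0.1, t_end=100.0):
    # Exact stochastic simulation (Gillespie SSA) of a minimal
    # birth-death gene expression model: protein made at constant
    # rate k_syn, degraded/diluted at rate k_deg * n.
    t, n = 0.0, 0
    history = [(t, n)]
    while t < t_end:
        a_syn, a_deg = k_syn, k_deg * n      # reaction propensities
        a_tot = a_syn + a_deg                # > 0 because k_syn > 0
        t += rng.exponential(1.0 / a_tot)    # waiting time to next event
        n += 1 if rng.random() < a_syn / a_tot else -1
        history.append((t, n))
    return history

final_t, final_n = gillespie_birth_death()[-1]
print(f"count at t={final_t:.1f}: {final_n} (mean ~ k_syn/k_deg = 100)")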

Relevance: 60.00%

Abstract:

The overarching goal of the Pathway Semantics Algorithm (PSA) is to improve the in silico identification of clinically useful hypotheses about molecular patterns in disease progression. By framing biomedical questions within a variety of matrix representations, PSA has the flexibility to analyze combined quantitative and qualitative data over a wide range of stratifications. The resulting hypothetical answers can then move to in vitro and in vivo verification, research assay optimization, clinical validation, and commercialization. Herein PSA is shown to generate novel hypotheses about the significant biological pathways in two disease domains, shock/trauma and hemophilia A, and is validated experimentally in the latter. The PSA matrix algebra approach identified differential molecular patterns in biological networks over time and outcome that would not easily be found through direct assays, literature searches or database searches. In this dissertation, Chapter 1 provides a broad overview of the background and motivation for the study, followed in Chapter 2 by a literature review of relevant computational methods. Chapters 3 and 4 describe PSA for node and edge analysis, respectively, and apply the method to disease progression in shock/trauma. Chapter 5 demonstrates the application of PSA to hemophilia A and its validation against experimental results. The work is summarized in Chapter 6, followed by extensive references and an appendix with additional material.
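As a toy illustration of the matrix-algebra viewpoint (much simpler than PSA itself; the matrices and scores below are invented for illustration), differential molecular patterns over time can be read off a difference of weighted adjacency matrices:

import numpy as np

# Rows/columns index molecules; entries are interaction weights at
# two sampling times. PSA's actual representations are far richer.
A_t0 = np.array([[0., 1., 0.],
                 [1., 0., 2.],
                 [0., 2., 0.]])        # baseline network
A_t1 = np.array([[0., 3., 0.],
                 [3., 0., 0.],
                 [0., 0., 0.]])        # network after progression

D = A_t1 - A_t0                                # differential edge matrix
changed_edges = np.argwhere(np.triu(D) != 0)   # edges that changed
node_burden = np.abs(D).sum(axis=1)            # per-node change score
print(changed_edges.tolist(), node_burden)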

Relevance: 60.00%

Abstract:

We examine the potential impact of TTIP through trade-cost reductions, applying a mix of econometric and computational methods to develop estimates of the benefits (and costs) for the EU, United States, and third countries. Econometric results point to approximately 80% growth in bilateral trade under an ambitious trade agreement. At the same time, computable general equilibrium (CGE) estimates highlight distributional impacts across countries and factors that are not evident from econometrics alone. Translated through our CGE framework, while bilateral trade increases by roughly 80%, trade with the rest of the world falls by about 2.5% in our central case. The estimated gains in annual consumption range from 1% for the United States to 2.25% for the EU. A purely discriminatory agreement would harm most countries outside the agreement, and the direction of third-country effects hinges critically on whether NTB reductions end up being discriminatory or not. Within the United States and EU, labour gains across skill categories, while the impact on farmers is mixed.
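For context, econometric trade-cost estimates of this kind typically rest on a gravity specification of bilateral trade (shown here generically; the paper's exact econometric model may differ):

\[ \ln X_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln \tau_{ij} + \varepsilon_{ij}, \]

where \(X_{ij}\) is trade from country i to country j, \(Y_i\) and \(Y_j\) are their economic sizes, and \(\tau_{ij}\) collects bilateral trade costs; an agreement such as TTIP enters as a reduction in \(\tau_{ij}\), whose estimated coefficient feeds the CGE scenarios.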

Relevance: 60.00%

Abstract:

Determining the isotopic content of spent nuclear fuel as accurately as possible is gaining importance due to its safety and economic implications. Since higher burnups are nowadays achievable through increased initial enrichments, more efficient burnup strategies within the reactor cores and the extension of irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (the OECD/NEA light-water-moderated pin cell), with the purpose of validating them and reviewing the state of the art in isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the reported geometries, material compositions and burnup history of cycles 7-11 of the Spanish Vandellós II reactor and to reproduce measured isotopic compositions after irradiation and decay times. We compare the measurements with each code's results for several levels of geometrical modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed and a new normalization strategy is developed for the selected and similar problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to establish a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates uncertainties in the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross sections, decay data and fission yields.
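At the core of every inventory code compared here is a depletion/decay step of the Bateman form dN/dt = A N. Below is a minimal Python sketch of such a step (a toy two-nuclide chain with hypothetical rates, not the codes' actual solvers or evaluated nuclear data):

import numpy as np
from scipy.linalg import expm

# Toy decay step: dN/dt = A N, solved as N(t) = expm(A t) N(0).
lam_parent, lam_daughter = 1.0e-3, 5.0e-4        # decay constants (1/s), hypothetical
A = np.array([[-lam_parent, 0.0],
              [ lam_parent, -lam_daughter]])     # Bateman matrix for the chain
N0 = np.array([1.0e20, 0.0])                     # initial inventories (atoms)

N = expm(A * 3600.0) @ N0                        # inventories after 1 h of decay
print(N)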

Relevance: 60.00%

Abstract:

The uncertainty propagation in fuel cycle calculations due to Nuclear Data (ND) is an important issue for:
• present fuel cycles (e.g. the high-burnup fuel programme);
• new fuel cycle designs (e.g. fast breeder reactors and ADS).
Different error propagation techniques can be used:
• sensitivity analysis;
• the Response Surface Method;
• the Monte Carlo technique.
In this paper, the impact of ND uncertainties on decay heat and radiotoxicity is assessed in two applications:
• the Fission Pulse Decay Heat calculation (FPDH);
• the conceptual design of the European Facility for Industrial Transmutation (EFIT).
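A minimal sketch of the Monte Carlo technique named above (a hypothetical one-nuclide response with invented values; real analyses sample correlated cross sections, decay data and fission yields through a full depletion calculation):

import numpy as np

rng = np.random.default_rng(1)

# Sample an uncertain decay constant, recompute the response (decay
# heat of a single nuclide), and report mean and spread.
lam_mean, rel_unc = 1.0e-4, 0.05        # decay constant (1/s), 5% unc., hypothetical
N0, E_per_decay = 1.0e18, 1.0e-13       # atoms, energy per decay (J), hypothetical
t = 1.0e3                               # cooling time (s)

lam = rng.normal(lam_mean, rel_unc * lam_mean, size=10_000)
heat = E_per_decay * lam * N0 * np.exp(-lam * t)     # decay heat (W)
print(f"decay heat = {heat.mean():.4e} W +/- {heat.std():.1e} W")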

Relevance: 60.00%

Abstract:

There is interest in performing full-core pin-by-pin computations for present nuclear reactors. In this type of problem, the use of a transport approximation like the diffusion equation requires the introduction of correction parameters. Interface discontinuity factors (IDF) can improve the diffusion solution to nearly reproduce a transport solution. Nevertheless, calculating accurate pin-by-pin IDF requires knowledge of the heterogeneous neutron flux distribution, which depends on the boundary conditions of the pin-cell as well as on local variables along nuclear reactor operation. As a consequence, it is impractical to compute them for each possible configuration. An alternative for generating accurate pin-by-pin interface discontinuity factors is to calculate reference values using zero-net-current boundary conditions and afterwards to synthesize their dependencies on the main neighborhood variables. In this way the factors can be accurately computed during fine-mesh diffusion calculations by correcting the reference values as a function of the actual environment of the pin-cell in the core. In this paper we propose a parameterization of the pin-by-pin interface discontinuity factors that allows the implementation of a cross-section library able to treat the neighborhood effect. First results are presented for typical PWR configurations.
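For reference, the standard definition from Generalized Equivalence Theory (a textbook relation, not this paper's parameterization): on each side of an interface \(\Gamma\) the discontinuity factor is the ratio of surface-averaged heterogeneous to homogeneous flux, and the homogeneous solution preserves the net current while its flux is allowed to jump:

\[ f_g^{\pm} = \frac{\overline{\phi}_g^{\,\mathrm{het}}\big|_{\Gamma}}{\overline{\phi}_g^{\,\mathrm{hom},\pm}\big|_{\Gamma}}, \qquad f_g^{+}\,\overline{\phi}_g^{\,\mathrm{hom},+} = f_g^{-}\,\overline{\phi}_g^{\,\mathrm{hom},-}, \qquad J_g^{+} = J_g^{-}. \]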

Relevance: 60.00%

Abstract:

Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented in the framework of the Analytic Coarse Mesh Finite Difference (ACMFD) method, based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. This makes it possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
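Schematically (our notation, following the description above): diagonalizing the multigroup diffusion matrix turns the coupled group equations into uncoupled modal ones, and the interface condition is imposed as a modal jump rather than a physical-flux ratio:

\[ A = R\,\Lambda\,R^{-1}, \qquad \psi = R^{-1}\phi, \qquad \psi^{\mathrm{het}}\big|_{\Gamma} - \psi^{\mathrm{hom}}\big|_{\Gamma} = \Delta\psi_{\Gamma}, \]

where \(\phi\) are the physical group fluxes, \(\psi\) the modal fluxes, and \(\Delta\psi_{\Gamma}\) the prescribed interface jump.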

Relevance: 60.00%

Abstract:

Within the framework of the Collaborative Project for a European Sodium Fast Reactor, the reactor physics group at UPM is working on the extension of its in-house multi-scale advanced deterministic code COBAYA3 to Sodium Fast Reactors (SFR). COBAYA3 is a 3D multigroup neutron kinetics diffusion code that can be used either as a pin-by-pin code or as a stand-alone nodal code through the analytic nodal diffusion solver ANDES. It is coupled with thermal-hydraulics codes such as COBRA-TF and FLICA, allowing transient analysis of LWRs at both fine-mesh and coarse-mesh scales. In order to also enable 3D pin-by-pin and nodal coupled NK-TH simulations of SFR, different developments are in progress. This paper presents the first steps towards the application of COBAYA3 to this type of reactor. The ANDES solver, already extended to triangular-Z geometry, has been applied to fast reactor steady-state calculations. The required cross-section libraries were generated with the ERANOS code for several configurations. The limitations encountered in the application of the Analytic Coarse Mesh Finite Difference (ACMFD) method, implemented inside ANDES, to fast reactors are presented, and the sensitivity of the method when using a high number of energy groups is studied. ANDES performance is assessed by comparison with the results provided by ERANOS, using a mini-core model in 33 energy groups. Furthermore, a NEA benchmark for a small 3D FBR in hexagonal-Z geometry and 4 energy groups is also employed to verify the behavior of the code with few energy groups.

Relevance: 60.00%

Abstract:

The threat of impact or explosive loads is regrettably a scenario to be taken into account in the design of lifeline or critical civilian buildings. These are often made of concrete and not specifically designed for military threats. Numerical simulation of such cases may be undertaken with the aid of state-of-the-art explicit dynamic codes; however, several difficult challenges are inherent to such models: material modeling of the anisotropic failure of concrete, consideration of reinforcement bars and important structural details, adequate modeling of pressure waves from explosions in complex geometries, and efficient solution of models of complete buildings that can realistically assess failure modes. In this work we employ LS-Dyna for calculation, with Lagrangian finite elements and explicit time integration. Reinforced concrete may be represented fairly accurately with recent models such as the CSCM model [1] and segregated rebars constrained within the continuum mesh. However, such models cannot realistically be employed for complete models of large buildings, due to limitations of time and computer resources. The use of structural beam and shell elements for this purpose would be the obvious solution, with much lower computational cost. However, this modeling requires careful calibration in order to adequately reproduce the highly nonlinear response of structural concrete members, including bending with and without compression, cracking or plastic crushing, plastic deformation of reinforcement, erosion of vanished elements, etc. The main objective of this work is to provide a strategy for modeling such scenarios based on structural elements, using available material models for structural elements [2] and techniques to include the reinforcement in a realistic way. These models are calibrated against fully three-dimensional models and shown to be accurate enough. At the same time they provide the basis for realistic simulation of impact and explosion on full-scale buildings.

Relevance: 60.00%

Abstract:

A local proper orthogonal decomposition (POD) plus Galerkin projection method was recently developed to accelerate time-dependent numerical solvers of PDEs. This method is based on the combined use of a numerical code (NC) and a Galerkin system (GS) in a sequence of interspersed time intervals, INC and IGS, respectively. POD is performed on some sets of snapshots calculated by the numerical solver in the INC intervals. The governing equations are Galerkin projected onto the most energetic POD modes and the resulting GS is time integrated in the next IGS interval. The major computational effort is associated with the snapshot calculation in the first INC interval, where the POD manifold needs to be completely constructed (it is only updated in subsequent INC intervals, which can thus be quite small). As the POD manifold depends only weakly on the particular values of the parameters of the problem, a suitable library can be constructed by adapting the snapshots calculated in other runs to drastically reduce the size of the first INC interval and thus the involved computational cost. The strategy is successfully tested in (i) the one-dimensional complex Ginzburg-Landau equation, including the case in which it exhibits transient chaos, and (ii) the two-dimensional unsteady lid-driven cavity problem.
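A minimal Python sketch of the snapshot-POD step on synthetic data (only the SVD-based mode extraction; the paper's local POD + Galerkin stepping and snapshot-library adaptation are not reproduced here):

import numpy as np

# Columns of S are snapshots of a 1D field at successive times.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
S = (np.sin(2 * np.pi * (x[:, None] - t[None, :]))            # travelling wave
     + 0.3 * np.outer(np.sin(6 * np.pi * x), np.cos(4 * np.pi * t))
     + 1e-3 * rng.standard_normal((200, 50)))                 # noise

U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes retaining 99.9% energy
pod_modes = U[:, :r]                          # columns span the POD manifold
print(f"{r} POD modes capture {energy[r - 1]:.4%} of the snapshot energy")

The governing equations would then be Galerkin projected onto pod_modes to obtain the low-dimensional GS that is time integrated in the IGS intervals.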

Relevance: 60.00%

Abstract:

This paper describes an interactive set of tools used to determine the safety of tunnels and to provide data for decision making on their maintenance. Although there are, no doubt, still several drawbacks in the difficult procedures currently in use, the approach is clearly promising, and future improvements in both experimental and analytical methods will increase our understanding of this matter.

Relevance: 60.00%

Abstract:

Laser Shock Processing (LSP) is developing as a key technology for the improvement of the surface mechanical and corrosion resistance properties of metals, due to its ability to introduce intense compressive residual stress fields into high-elastic-limit materials by means of an intense laser-driven shock wave, generated by laser pulses with intensities exceeding the 10^9 W/cm^2 threshold, pulse energies in the range of 1 J and interaction times of several nanoseconds. However, because of the relatively difficult-to-describe physics of shock wave formation in plasma following laser-matter interaction in the solid state, only limited knowledge is available by way of full comprehension and predictive assessment of the characteristic physical processes and material transformations with specific consideration of real material properties. In the present paper, an account of the physical issues dominating the development of LSP processes from a moderately high intensity laser-matter interaction point of view is presented, along with the theoretical and computational methods developed by the authors for their predictive assessment and new contrasting experimental results obtained at laboratory scale.
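For a feel of the magnitudes involved, the peak pressure of a confined laser-driven plasma is often estimated with the model of Fabbro et al. (1990); the Python sketch below uses typical textbook impedance values for water-confined aluminium (illustrative numbers only, not this paper's computations):

import numpy as np

def fabbro_peak_pressure_GPa(I0_GW_cm2, Z_confine, Z_target, alpha=0.1):
    # Widely used confined-plasma estimate (Fabbro et al., 1990):
    # P[GPa] = 0.01 * sqrt(alpha/(2*alpha+3)) * sqrt(Z) * sqrt(I0),
    # with impedances Z in g cm^-2 s^-1 and I0 in GW/cm^2; alpha is
    # the fraction of plasma energy producing pressure (~0.1-0.2).
    Z = 2.0 / (1.0 / Z_confine + 1.0 / Z_target)   # reduced acoustic impedance
    return 0.01 * np.sqrt(alpha / (2.0 * alpha + 3.0) * Z * I0_GW_cm2)

# Water-confined aluminium at 10 GW/cm^2, a typical LSP regime:
print(fabbro_peak_pressure_GPa(10.0, Z_confine=1.65e5, Z_target=1.5e6))  # ~3 GPa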