923 results for mesh-free method
Abstract:
Tissue-engineered blood vessels (TEBV) can serve as vascular grafts and may also play an important role in the development of organs-on-a-chip. Most TEBV construction involves scaffolding with biomaterials such as collagen gel or electrospun fibrous mesh. Hypothesizing that a scaffold-free TEBV may be advantageous, we constructed a tubular structure (1 mm i.d.) from aligned human mesenchymal stem cell (hMSC) sheets as the wall and a human endothelial progenitor cell (hEPC) coating as the lumen. The burst pressure of the scaffold-free TEBV was above 200 mmHg after three weeks of sequential culture in a rotating wall bioreactor and perfusion at 6.8 dynes/cm². The interwoven organization of the cell layers and extensive extracellular matrix (ECM) formation of the hMSC-based TEBV resembled that of native blood vessels. The TEBV exhibited flow-mediated vasodilation, vasoconstriction after exposure to 1 μM phenylephrine, and released nitric oxide in a manner similar to that of porcine femoral vein. HL-60 cells attached to the TEBV lumen after TNF-α activation, suggesting a functional endothelium. This study demonstrates the potential of a hEPC-endothelialized hMSC-based TEBV for drug screening.
Abstract:
We present fast functional photoacoustic microscopy (PAM) for three-dimensional high-resolution, high-speed imaging of the mouse brain, complementary to other imaging modalities. We implemented a single-wavelength pulse-width-based method with a one-dimensional imaging rate of 100 kHz to image blood oxygenation with capillary-level resolution. We applied PAM to image the vascular morphology, blood oxygenation, blood flow and oxygen metabolism in both resting and stimulated states in the mouse brain.
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant of the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths while retaining the beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
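For orientation, the sketch below illustrates the general idea of decomposing a discrete alchemical path into adjacent pairs of states and accumulating a per-segment free energy. It uses the standard Bennett acceptance ratio (the maximum-likelihood estimator implied by the Crooks fluctuation theorem for two-sided work data) rather than the pCrooks estimator of the abstract; the names, the kT units, and the bracketing assumption are illustrative.

```python
# Hedged sketch, not the pCrooks estimator described above: each adjacent pair
# of alchemical states is resolved with the standard Bennett acceptance ratio
# (BAR), and the per-segment estimates are summed along the discrete path.
# Work values are assumed to be in units of kT; w_reverse holds the work for
# the reverse switch (state i+1 -> state i).
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_forward, w_reverse):
    """BAR estimate of the free energy difference (in kT) for one segment."""
    n_f, n_r = len(w_forward), len(w_reverse)
    m = np.log(n_f / n_r)

    def fermi(x):
        return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))

    # Self-consistency condition; its root in delta_f is the BAR estimate.
    def residual(delta_f):
        return (fermi(m + w_forward - delta_f).sum()
                - fermi(-m + w_reverse + delta_f).sum())

    # Crude centre for the bracket; assumes the answer lies within +/- 50 kT.
    rough = 0.5 * (np.mean(w_forward) - np.mean(w_reverse))
    return brentq(residual, rough - 50.0, rough + 50.0)

def path_free_energy(segment_work):
    """Sum per-segment estimates for a list of (w_forward, w_reverse) pairs."""
    per_segment = [bar_delta_f(wf, wr) for wf, wr in segment_work]
    return sum(per_segment), per_segment
```

Keeping the per-segment values makes it possible to see which intermediates dominate the variance of the total estimate, which is the property the pairwise decomposition described in the abstract targets.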
Abstract:
PURPOSE: Conventional staging methods are inadequate to identify patients with stage II colon cancer (CC) who are at high risk of recurrence after surgery with curative intent. ColDx is a gene expression, microarray-based assay shown to be independently prognostic for recurrence-free interval (RFI) and overall survival in CC. The objective of this study was to further validate ColDx using formalin-fixed, paraffin-embedded specimens collected as part of the Alliance phase III trial, C9581.
PATIENTS AND METHODS: C9581 evaluated edrecolomab versus observation in patients with stage II CC and reported no survival benefit. Under an initial case-cohort sampling design, a randomly selected subcohort (RS) comprised 514 patients from 901 eligible patients with available tissue. Forty-nine additional patients with recurrence events were included in the analysis. Final analysis comprised 393 patients: 360 RS (58 events) and 33 non-RS events. Risk status was determined for each patient by ColDx. The Self-Prentice method was used to test the association between the resulting ColDx risk score and RFI adjusting for standard prognostic variables.
RESULTS: Fifty-five percent of patients (216 of 393) were classified as high risk. After adjustment for prognostic variables that included mismatch repair (MMR) deficiency, ColDx high-risk patients exhibited significantly worse RFI (multivariable hazard ratio, 2.13; 95% CI, 1.3 to 3.5; P < .01). Age and MMR status were marginally significant. RFI at 5 years for patients classified as high risk was 82% (95% CI, 79% to 85%), compared with 91% (95% CI, 89% to 93%) for patients classified as low risk.
CONCLUSION: ColDx is associated with RFI in the C9581 subsample in the presence of other prognostic factors, including MMR deficiency. ColDx could be incorporated with the traditional clinical markers of risk to refine patient prognosis.
Abstract:
A new variant of the Element-Free Galerkin (EFG) method, which combines the diffraction method to characterize the crack tip solution with the Heaviside enrichment function for representing the discontinuity due to a crack, has been used to model crack propagation through non-homogeneous materials. In the case of interface crack propagation, the kink angle is predicted by applying the maximum tangential principal stress (MTPS) criterion in conjunction with consideration of the energy release rate (ERR). The MTPS criterion is applied to the crack tip stress field described by both the stress intensity factor (SIF) and the T-stress, which are extracted using the interaction integral method. The proposed EFG method has been developed and applied to 2D case studies involving a crack in an orthotropic material, a crack along an interface, and a crack terminating at a bi-material interface, under mechanical or thermal loading; this is done to demonstrate the advantages and efficiency of the proposed methodology. The computed SIFs, T-stress and the predicted interface crack kink angles are compared with existing results in the literature and are found to be in good agreement. An example of crack growth through a particle-reinforced composite material, which may involve crack meandering around the particle, is also reported.
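As a point of reference for the kink-angle prediction mentioned above: the classical maximum tangential stress (Erdogan-Sih) criterion, which ignores the T-stress, admits a closed form in K_I and K_II. The sketch below implements only that simplified criterion, not the T-stress-corrected MTPS criterion of the paper.

```python
# Hedged sketch: classical maximum tangential stress (Erdogan-Sih) kink angle.
# The MTPS criterion in the paper additionally accounts for the T-stress; that
# correction is omitted here.
import math

def mts_kink_angle(k1, k2):
    """Kink angle (radians) that maximizes the tangential stress at the tip.

    Solves K_I*sin(theta) + K_II*(3*cos(theta) - 1) = 0 and returns the root
    corresponding to the stress maximum.
    """
    if k2 == 0.0:
        return 0.0  # pure mode I: the crack extends straight ahead
    t = (k1 - math.sqrt(k1 * k1 + 8.0 * k2 * k2)) / (4.0 * k2)
    return 2.0 * math.atan(t)

# Pure mode II loading kinks at about -70.5 degrees for positive K_II.
print(math.degrees(mts_kink_angle(0.0, 1.0)))   # approx. -70.53
print(math.degrees(mts_kink_angle(1.0, 0.5)))   # mixed-mode example, approx. -40.2
```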
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, which leads many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issues. Focusing on memory faults, the proposed method does not correct every single error; instead, it exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the proposed enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
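The custom instructions and functional units of the paper cannot be reconstructed from the abstract; the fragment below only illustrates, at the software level and with hypothetical names, the substitution idea of replacing a faulty read with a statistically likely estimate instead of correcting it.

```python
# Illustration of the error-confinement idea only (hypothetical names): a
# detected memory fault is not corrected; the value is replaced by a running
# statistical estimate for that data stream, clamped to its plausible range.
class ErrorConfiningBuffer:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # plausible value range for this stream
        self.mean = 0.5 * (lo + hi)    # running estimate of the stream mean
        self.count = 0

    def read(self, value, fault_detected):
        if fault_detected or not (self.lo <= value <= self.hi):
            return self.mean           # best available estimate of the data
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return value
```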
Abstract:
With the objective of improving reactor physics calculations for 2D and 3D nuclear reactors via the diffusion equation, an adaptive automatic finite element remeshing method, based on elementary area (2D) or volume (3D) constraints, has been developed. The adaptive remeshing technique, guided by an a posteriori error estimator, makes use of two external mesh generator programs: Triangle and TetGen. The use of these free external finite element mesh generators and an adaptive remeshing technique based on the current field continuity shows that they are powerful tools to improve the neutron flux distribution calculation and, consequently, the power solution of the reactor core, even though they have a minor influence on the critical coefficient of the calculated reactor core examples. Two numerical examples are presented: the 2D IAEA reactor core numerical benchmark and a 3D model of the Argonauta research reactor, built in Brazil.
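For the 2D branch, an adaptive cycle of this kind can be driven through Triangle's documented refinement switches and per-triangle .area constraint file. The sketch below assumes the per-element error estimates are already available and uses illustrative file names; the error estimator itself and the TetGen (3D) path are omitted.

```python
# Hedged sketch of one adaptive remeshing cycle with Triangle: elements whose
# estimated error exceeds a tolerance get a tighter maximum-area constraint in
# a .area file, and Triangle is re-run with -r (refine) and -a (read areas).
# Assumes Triangle's default 1-based element numbering; names are illustrative.
import subprocess

def write_area_constraints(path, element_errors, element_areas, tolerance):
    """Write <mesh>.area: one maximum-area constraint per triangle (-1 = none)."""
    with open(path, "w") as f:
        f.write(f"{len(element_errors)}\n")
        for i, (err, area) in enumerate(zip(element_errors, element_areas), start=1):
            max_area = 0.5 * area if err > tolerance else -1.0
            f.write(f"{i} {max_area}\n")

def refine(mesh_stem):
    """Refine mesh_stem.node/.ele using mesh_stem.area (e.g. 'core.1')."""
    subprocess.run(["triangle", "-rqa", mesh_stem], check=True)
```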
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using two dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
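The abstract mentions an implicit predictor-corrector version of the Newmark algorithm. The sketch below shows the textbook Newmark-beta predictor-corrector step for the semi-discrete system M a'' + C a' + K a = f; it is not the paper's FV-UM implementation, but it indicates the structure of such a step (dense numpy matrices and beta = 1/4, gamma = 1/2 assumed).

```python
# Hedged sketch: one implicit Newmark-beta predictor-corrector step for the
# semi-discrete structural system  M a'' + C a' + K a = f.  This is the
# textbook algorithm, not the paper's FV-UM implementation.
import numpy as np

def newmark_step(M, C, K, f_next, d, v, a, dt, beta=0.25, gamma=0.5):
    # Predictor: displacement and velocity extrapolated with the old acceleration.
    d_pred = d + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
    v_pred = v + dt * (1.0 - gamma) * a

    # Implicit solve for the new acceleration via the effective system matrix.
    A_eff = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(A_eff, f_next - C @ v_pred - K @ d_pred)

    # Corrector: update displacement and velocity with the new acceleration.
    d_new = d_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return d_new, v_new, a_new
```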
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm together with a local optimisation method which refines the partition at each graph level. In this paper we present an enhancement of the technique which uses imbalance to achieve higher quality partitions. We also present a formulation of the Kernighan-Lin partition optimisation algorithm which incorporates load-balancing. The resulting algorithm is tested against a different but related state-of-the-art partitioner and shown to provide improved results.
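The Kernighan-Lin refinement with load-balancing referred to above operates on gains of boundary-vertex moves subject to a balance constraint. The sketch below is a much-simplified single-level, two-way, unit-edge-weight version of that idea, not the multilevel algorithm of the paper; all names are illustrative.

```python
# Hedged sketch of KL/FM-style refinement for a 2-way partition: each pass
# moves boundary vertices with positive cut-edge gain, but only if the move
# keeps the part weights within a tolerance of perfect balance.
def refine_two_way(adj, part, vertex_weight, imbalance=0.05, passes=3):
    total = sum(vertex_weight)
    max_part = (1.0 + imbalance) * total / 2.0
    weights = [sum(w for side, w in zip(part, vertex_weight) if side == p)
               for p in (0, 1)]

    def gain(v):
        # Reduction in cut edges if v switches sides.
        external = sum(1 for u in adj[v] if part[u] != part[v])
        internal = len(adj[v]) - external
        return external - internal

    for _ in range(passes):
        moved = False
        # Boundary vertices, considered in order of decreasing current gain.
        boundary = [v for v in range(len(adj))
                    if any(part[u] != part[v] for u in adj[v])]
        for v in sorted(boundary, key=gain, reverse=True):
            target = 1 - part[v]
            if gain(v) > 0 and weights[target] + vertex_weight[v] <= max_part:
                weights[part[v]] -= vertex_weight[v]
                weights[target] += vertex_weight[v]
                part[v] = target
                moved = True
        if not moved:
            break
    return part
```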
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for mapping meshes onto parallel computers. They usually combine a graph contraction algorithm together with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight in the graph with the aim of minimising the parallel communication overhead. However it has been shown that for certain classes of problem, the convergence of the underlying solution algorithm is strongly influenced by the shape or aspect ratio of the subdomains. In this paper therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm together with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight; however, it has been shown that for certain classes of solution algorithm, the convergence of the solver is strongly influenced by the subdomain aspect ratio. In this paper, therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
Abstract:
This paper deals with the measure of Aspect Ratio for mesh partitioning and gives hints as to why, for certain solvers, the Aspect Ratio of partitions plays an important role. We define and rate different kinds of Aspect Ratio, present a new center-based partitioning method which optimizes this measure implicitly, and rate several existing partitioning methods and tools under the criterion of Aspect Ratio.
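As one concrete example of such a measure (an illustrative choice, not necessarily one of the definitions rated in the paper), a 2D subdomain's shape can be scored by its perimeter squared over its area, normalised so that a circle scores 1:

```python
# Hedged example of one common aspect-ratio-style shape measure for a 2D
# subdomain: perimeter squared over area, normalised so that a circle scores
# 1 and elongated or ragged subdomains score higher.
import math

def shape_measure(boundary_length, area):
    return boundary_length ** 2 / (4.0 * math.pi * area)

# A unit square scores 16 / (4*pi) ~= 1.27; a 4 x 0.25 rectangle of the same
# area has perimeter 8.5 and scores ~= 5.75, reflecting its poorer aspect ratio.
print(shape_measure(4.0, 1.0), shape_measure(8.5, 1.0))
```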
Abstract:
A method is outlined for optimising graph partitions which arise in mapping unstructured mesh calculations to parallel computers. The method employs a relative gain iterative technique to both evenly balance the workload and minimise the number and volume of interprocessor communications. A parallel graph reduction technique is also briefly described and can be used to give a global perspective to the optimisation. The algorithms work efficiently in parallel as well as sequentially, and when combined with a fast direct partitioning technique (such as the Greedy algorithm) to give an initial partition, the resulting two-stage process proves itself to be both a powerful and flexible solution to the static graph-partitioning problem. Experiments indicate that the resulting parallel code can provide high quality partitions, independent of the initial partition, within a few seconds. The algorithms can also be used for dynamic load-balancing, reusing existing partitions; in this case the procedures are much faster than static techniques, provide partitions of similar or higher quality and, in comparison, involve the migration of only a fraction of the data.
Abstract:
The print substrate influences the print result in dry toner electrophotography, which is a widely used digital printing method. The influence of the substrate can be seen more easily in color printing, as it is a more complex process than monochrome printing; however, the print quality is also affected by the print substrate in grayscale printing. It is thus in the interests of both substrate producers and printing equipment manufacturers to understand in more detail the substrate properties that influence the quality of printed images. In dry toner electrophotography, the image is printed by transferring charged toner particles to the print substrate in the toner transfer nip, utilizing an electric field in addition to the forces linked to the contact between toner particles and substrate in the nip. The toner transfer and the resulting image quality are thus influenced by the surface texture and the electrical and dielectric properties of the print substrate.

To investigate the electrical and dielectric properties of the papers and the effects of substrate roughness, controlled sample sets were made on pilot paper machines and coating machines, in addition to commercial papers, to exclude uncontrolled variables from the experiments. The electrical and dielectric properties investigated were electrical resistivity and conductivity, charge acceptance, charge decay, and the dielectric permittivity and losses at different frequencies, including the effect of temperature. The objective was to gain an understanding of how the electrical and dielectric properties are affected by normal variables in papermaking, including basis weight, material density, filler content, ion and moisture contents, and coating. In addition, the dependency of substrate resistivity on the applied electric field was investigated.

Local discharging did not inhibit transfer at the paper roughness levels that are normal in electrophotographic color printing. The potential decay of paper revealed that the charge decay cannot be accurately described with a single exponential function, since overlapping mechanisms of conduction and depolarization of the paper are involved. The resistivity of the paper depends on the NaCl content and exponentially on the moisture content, although it is also strongly dependent on the applied electric field. This dependency is influenced by the thickness, density, and filler content of the paper. Furthermore, the Poole-Frenkel model can be applied to the resistivity of uncoated paper.

The real part of the dielectric constant ε' increases with NaCl content and relative humidity, but when these materials cannot polarize freely, the increase cannot be explained by summing the effects of their dielectric constants. Dependencies between the dielectric constant and dielectric loss factor and NaCl content, temperature, and frequency show that, in the presence of a sufficient amount of moisture and NaCl, new structures with a relaxation time of the order of 10⁻³ s are formed in the paper. The ε' of coated papers is influenced by the addition of pigments and other coating additives with polarizable groups, and by the increase in density. The charging potential decreases and the electrical conductivity, potential decay rate, and dielectric constant of paper increase with increasing temperature. These dependencies are exponential, and the temperature dependencies and their activation energies are altered by the ion content. The results have been utilized in manufacturing substrates for electrophotographic color printing.
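For reference, the Poole-Frenkel field dependence mentioned in the abstract has the standard form sigma(E) = sigma_0 exp(beta_PF sqrt(E) / (k_B T)), with beta_PF = sqrt(e^3 / (pi epsilon_0 epsilon_r)). The sketch below evaluates this expression with placeholder parameter values, not data from the thesis.

```python
# Hedged sketch of the Poole-Frenkel field dependence: conductivity grows as
# exp(beta_PF * sqrt(E) / (k_B * T)), beta_PF = sqrt(e^3 / (pi * eps0 * eps_r)).
# sigma0 (which absorbs the zero-field barrier term) and eps_r below are
# placeholder values, not measurements from the thesis.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
K_B = 1.380649e-23           # Boltzmann constant, J/K

def poole_frenkel_conductivity(field, sigma0, eps_r, temperature):
    """Conductivity (S/m) at electric field strength `field` (V/m)."""
    beta_pf = math.sqrt(E_CHARGE ** 3 / (math.pi * EPS0 * eps_r))
    return sigma0 * math.exp(beta_pf * math.sqrt(field) / (K_B * temperature))

# Illustrative numbers only: eps_r = 3, T = 296 K, fields of 1 and 10 MV/m.
for field in (1e6, 1e7):
    print(field, poole_frenkel_conductivity(field, sigma0=1e-12, eps_r=3.0,
                                            temperature=296.0))
```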