11 results for Halley’s and Euler-Chebyshev’s Methods
at Duke University
Abstract:
Predicting from first-principles calculations whether mixed metallic elements phase-separate or form ordered structures is a major challenge of current materials research. It can be partially addressed in cases where experiments suggest the underlying lattice is conserved, using cluster expansion (CE) and a variety of exhaustive evaluation or genetic search algorithms. Evolutionary algorithms have been recently introduced to search for stable off-lattice structures at fixed mixture compositions. The general off-lattice problem is still unsolved. We present an integrated approach of CE and high-throughput ab initio calculations (HT) applicable to the full range of compositions in binary systems where the constituent elements or the intermediate ordered structures have different lattice types. The HT method replaces the search algorithms by direct calculation of a moderate number of naturally occurring prototypes representing all crystal systems and guides CE calculations of derivative structures. This synergy achieves the precision of the CE and the guiding strengths of the HT. Its application to poorly characterized binary Hf systems, believed to be phase-separating, defines three classes of alloys where CE and HT complement each other to uncover new ordered structures.
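For reference, the configurational energy in a cluster expansion of a binary arrangement σ on a fixed lattice is conventionally written as a sum over symmetry-distinct clusters; the basis, truncation, and fitting scheme used in this work are not stated in the abstract, so the expression below is the generic form only.

    E_{\mathrm{CE}}(\sigma) = \sum_{\alpha} m_{\alpha}\, J_{\alpha}\, \langle \Pi_{\alpha}(\sigma) \rangle

Here the Π_α are cluster correlation functions with multiplicities m_α, and the effective cluster interactions J_α are fit to first-principles energies such as those supplied by the HT calculations.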
Abstract:
The book also covers the Second Variation, Euler-Lagrange PDE systems, and higher-order conservation laws.
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
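As a minimal sketch of the quantity involved (the function name and the synthetic contact set below are illustrative assumptions, not the dissertation's tomography pipeline), the second-order fabric tensor of a contact network is the average outer product of the unit contact normals:

    import numpy as np

    def fabric_tensor(contact_normals):
        """Second-order fabric tensor F_ij = (1/Nc) * sum_c n_i n_j.

        contact_normals: (Nc, 3) array with one contact normal per row.
        The result is symmetric with unit trace; a spread in its eigenvalues
        indicates a directional bias in the inter-particle contact network.
        """
        n = np.asarray(contact_normals, dtype=float)
        n = n / np.linalg.norm(n, axis=1, keepdims=True)  # enforce unit normals
        return np.einsum('ci,cj->ij', n, n) / len(n)

    # Usage: an isotropic random assembly gives F close to identity/3, so any
    # calendering-induced flattening of contacts shows up as eigenvalue spread.
    rng = np.random.default_rng(0)
    F = fabric_tensor(rng.normal(size=(5000, 3)))
    print(np.round(F, 3), np.round(np.linalg.eigvalsh(F), 3))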
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
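A minimal sketch of the model's central construct, assuming a simple linear damage profile (the dissertation's exact profile is not given in the abstract): damage is a prescribed function of the level-set distance to the damage front and saturates over the length scale lc, which is what caps the damage gradient.

    import numpy as np

    def tls_damage(phi, lc):
        """Damage field from the level-set distance phi to the damage front.

        phi <= 0 is undamaged material; damage ramps linearly from 0 to 1 over
        the band 0 < phi < lc, so with |grad phi| = 1 the damage gradient is
        bounded by 1/lc.
        """
        return np.clip(phi / lc, 0.0, 1.0)

    # Usage: a 1D damaged band of half-width 0.2 centered at x = 0.5.
    x = np.linspace(0.0, 1.0, 11)
    phi = 0.2 - np.abs(x - 0.5)   # positive inside the band, negative outside
    print(np.round(tls_damage(phi, lc=0.2), 2))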
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important for producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these is the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the ability to represent subgrid structures.
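As a minimal illustration of the gradient-augmented idea (not the dissertation's narrow-band, multiphase implementation), the 1D sketch below advects both the level-set value phi and its derivative psi along characteristics of a constant-velocity field on a periodic grid, reconstructing each with a cubic Hermite interpolant; carrying the derivative is what provides the subgrid resolution mentioned above. The function names and the test profile are illustrative assumptions.

    import numpy as np

    def hermite(pa, pb, ma, mb, h, t):
        """Cubic Hermite value and derivative at local coordinate t in [0, 1]."""
        val = ((2*t**3 - 3*t**2 + 1) * pa + (t**3 - 2*t**2 + t) * h * ma
               + (-2*t**3 + 3*t**2) * pb + (t**3 - t**2) * h * mb)
        der = ((6*t**2 - 6*t) / h * pa + (3*t**2 - 4*t + 1) * ma
               + (-6*t**2 + 6*t) / h * pb + (3*t**2 - 2*t) * mb)
        return val, der

    def gals_step(x, phi, psi, u, dt):
        """Advect the pair (phi, dphi/dx) with constant velocity u on a periodic grid x."""
        h = x[1] - x[0]
        L = len(x) * h                                    # periodic domain length
        xd = (x - u * dt - x[0]) % L + x[0]               # departure points
        i = np.clip(np.floor((xd - x[0]) / h).astype(int), 0, len(x) - 1)
        t = (xd - x[i]) / h                               # local coordinate in the cell
        ip1 = (i + 1) % len(x)
        return hermite(phi[i], phi[ip1], psi[i], psi[ip1], h, t)

    # Usage: advect a smooth profile once around the periodic domain; the profile
    # and its derivative should return close to their initial values.
    n = 128
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    phi = np.sin(2 * np.pi * x)
    psi = 2 * np.pi * np.cos(2 * np.pi * x)
    u, dt = 1.0, 0.5 / n
    for _ in range(2 * n):                                # total displacement = one period
        phi, psi = gals_step(x, phi, psi, u, dt)
    print("max error after one period:", np.abs(phi - np.sin(2 * np.pi * x)).max())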
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: the air phase is replaced by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
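The sketch below is a schematic of the hedging step described above, using a plain up/down threshold adjustment in place of the paper's primal-dual update; the function name hedge_step, the step size eta, and the toy feedback rule are all illustrative assumptions, and no regret guarantee is implied.

    import numpy as np

    def hedge_step(tau, belief, feedback, eta=0.05):
        """Flag an anomaly when the filtered belief falls below the threshold tau,
        then adjust tau using the end user's feedback (+1 = anomalous, -1 = nominal)."""
        flag = belief < tau
        if feedback == 1 and not flag:
            tau += eta            # missed anomaly: the threshold was too low
        elif feedback == -1 and flag:
            tau -= eta            # false alarm: the threshold was too high
        return flag, float(np.clip(tau, 0.0, 1.0))

    # Usage with a toy stream of beliefs standing in for the filtering step.
    rng = np.random.default_rng(1)
    tau = 0.5
    for step in range(5):
        belief = rng.uniform()                  # stand-in for the filter's likelihood
        feedback = 1 if belief < 0.2 else -1    # toy ground truth supplied as feedback
        flag, tau = hedge_step(tau, belief, feedback)
        print(f"step={step} belief={belief:.2f} flag={bool(flag)} tau={tau:.2f}")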
Abstract:
New applications of genetic data to questions of historical biogeography have revolutionized our understanding of how organisms have come to occupy their present distributions. Phylogenetic methods in combination with divergence time estimation can reveal biogeographical centres of origin, differentiate between hypotheses of vicariance and dispersal, and reveal the directionality of dispersal events. Despite their power, however, phylogenetic methods can sometimes yield patterns that are compatible with multiple, equally well-supported biogeographical hypotheses. In such cases, additional approaches must be integrated to differentiate among conflicting dispersal hypotheses. Here, we use a synthetic approach that draws upon the analytical strengths of coalescent and population genetic methods to augment phylogenetic analyses in order to assess the biogeographical history of Madagascar's Triaenops bats (Chiroptera: Hipposideridae). Phylogenetic analyses of mitochondrial DNA sequence data for Malagasy and east African Triaenops reveal a pattern that equally supports two competing hypotheses. While the phylogeny cannot determine whether Africa or Madagascar was the centre of origin for the species investigated, it serves as the essential backbone for the application of coalescent and population genetic methods. From the application of these methods, we conclude that a hypothesis of two independent but unidirectional dispersal events from Africa to Madagascar is best supported by the data.
Abstract:
BACKGROUND: Few educational resources have been developed to inform patients' renal replacement therapy (RRT) selection decisions. Patients progressing toward end-stage renal disease (ESRD) must decide among multiple treatment options with varying characteristics. Complex information about treatments must be adequately conveyed to patients with different educational backgrounds and informational needs. Decisions about treatment options also require family input, as families often participate in patients' treatment and support patients' decisions. We describe the development, design, and preliminary evaluation of an informational, evidence-based, and patient- and family-centered decision aid for patients with ESRD and varying levels of health literacy, health numeracy, and cognitive function. METHODS: We designed a decision aid comprising a complementary video and informational handbook. We based our development process on data previously obtained from qualitative focus groups and systematic literature reviews. We simultaneously developed the video and handbook in "stages." For the video, stages included (1) directed interviews with culturally appropriate patients and families and preliminary script development, (2) video production, and (3) screening the video with patients and their families. For the handbook, stages comprised (1) preliminary content design, (2) a mixed-methods pilot study among diverse patients to assess comprehension of handbook material, and (3) screening the handbook with patients and their families. RESULTS: The video and handbook both addressed potential benefits and trade-offs of treatment selections. The 50-minute video consisted of demographically diverse patients and their families describing their positive and negative experiences with selecting a treatment option. The video also incorporated health professionals' testimonials regarding various considerations that might influence patients' and families' treatment selections. The handbook comprised written words, pictures of patients and health care providers, and diagrams describing the findings and quality of scientific studies comparing treatments. The handbook text was written at a 4th to 6th grade reading level. Pilot study results demonstrated that a majority of patients could understand information presented in the handbook. Patients and families screening the nearly completed video and handbook reviewed the materials favorably. CONCLUSIONS: This rigorously designed decision aid may help patients and families make informed decisions about their treatment options for RRT that are well aligned with their values.
Abstract:
The ability to manipulate the coordination chemistry of metal ions has significant ramifications for the study and treatment of metal-related health concerns, including iron overload, UV skin damage, and microbial infection, among many other conditions. To address these concerns, chelating agents that change their metal binding characteristics in response to external stimuli have been synthesized and characterized by several spectroscopic and chromatographic analytical methods. The primary stimuli of interest for this work are light and hydrogen peroxide.
Herein we report the previously unrecognized photochemistry of the aroylhydrazone metal chelator (E)-N′-[1-(2-hydroxyphenyl)ethyliden]isonicotinoylhydrazide (HAPI) and its relation to the metal binding properties of HAPI. Based on promising initial results, a series of HAPI analogues was prepared to probe the structure-function relationships of aroylhydrazone photochemistry. These efforts elucidate the tunable nature of several aroylhydrazone photoswitching properties.
Ongoing efforts in this laboratory seek to develop compounds called prochelators that exhibit a switch from low to high metal binding affinity upon activation by a stimulus of interest. In this context, we present new strategies to install multiple desired functions into a single structure. The prochelator 2-((E)-1-(2-isonicotinoylhydrazono)ethyl)phenyl (E)-3-(2,4-dihydroxyphenyl)acrylate (PC-HAPI) is masked with a photolabile trans-cinnamic acid protecting group that releases umbelliferone, a UV-absorbing antioxidant coumarin, along with a chelating agent upon UV irradiation. In addition to the antioxidant effects of the coumarin, the released chelator (HAPI) inhibits metal-catalyzed production of damaging reactive oxygen species. Finally, a peroxide-sensitive prochelator, quinolin-8-yl (Z)-3-(4-hydroxy-2-((4-(4,4,5,5-tetramethyl-1,3,2-dioxaborolan-2-yl)benzyl)oxy)phenyl)acrylate (BCQ), has been prepared using a novel synthetic route for functionalized cis-cinnamate esters. BCQ uses a novel masking strategy to trigger a 90-fold increase in fluorescence emission, along with the release of a desired chelator, in the presence of hydrogen peroxide.
Abstract:
"Facts and Fictions: Feminist Literary Criticism and Cultural Critique, 1968-2012" is a critical history of the unfolding of feminist literary study in the US academy. It contributes to current scholarly efforts to revisit the 1970s by reconsidering often-repeated narratives about the critical naivety of feminist literary criticism in its initial articulation. As the story now goes, many of the most prominent feminist thinkers of the period engaged in unsophisticated literary analysis by conflating lived social reality with textual representation when they read works of literature as documentary evidence of real life. As a result, the work of these "bad critics," particularly Kate Millett and Andrea Dworkin, has not been fully accounted for in literary critical terms.
This dissertation returns to Dworkin and Millett's work to argue for a different history of feminist literary criticism. Rather than dismiss their work for its conflation of fact and fiction, I pay attention to the complexity at the heart of it, yielding a new perspective on the history and persistence of the struggle to use literary texts for feminist political ends. Dworkin and Millett established the centrality of reality and representation to the feminist canon debates of "the long 1970s," the sex wars of the 1980s, and the more recent feminist turn to memoir. I read these productive periods in feminist literary criticism from 1968 to 2012 through their varied commitment to literary works.
Chapter One begins with Millett, who de-aestheticized male-authored texts to treat patriarchal literature in relation to culture and ideology. Her mode of literary interpretation was so far afield from the established methods of New Criticism that she was not understood as a literary critic. She was repudiated by the feminist literary criticism that followed her, which sought sympathetic methods for reading women's writing. In that decade, the subject of Chapter Two, feminist literary critics began to judge texts on the basis of their ability to accurately depict the reality of women's experiences.
Their vision of the relationship between life and fiction shaped arguments about pornography during the sex wars of the 1980s, the subject of Chapter Three. In this context, Dworkin was feminism's "bad critic." I focus on the literary critical elements of Dworkin's theories of pornographic representation and align her with Millett as a miscategorized literary critic. In the decades following the sex wars, many of the key feminist literary critics of the founding generation (including Dworkin, Jane Gallop, Carolyn Heilbrun, and Millett) wrote memoirs that recounted, largely in experiential terms, the history this dissertation examines. Chapter Four considers the story these memoirists told about the rise and fall of feminist literary criticism. I close with an epilogue on the place of literature in a feminist critical enterprise that has shifted toward privileging theory.
Abstract:
BACKGROUND: Scientists rarely reuse expert knowledge of phylogeny, in spite of years of effort to assemble a great "Tree of Life" (ToL). A notable exception involves the use of Phylomatic, which provides tools to generate custom phylogenies from a large, pre-computed, expert phylogeny of plant taxa. This suggests great potential for a more generalized system that, starting with a query consisting of a list of any known species, would rectify non-standard names, identify expert phylogenies containing the implicated taxa, prune away unneeded parts, and supply branch lengths and annotations, resulting in a custom phylogeny suited to the user's needs. Such a system could become a sustainable community resource if implemented as a distributed system of loosely coupled parts that interact through clearly defined interfaces. RESULTS: With the aim of building such a "phylotastic" system, the NESCent Hackathons, Interoperability, Phylogenies (HIP) working group recruited 2 dozen scientist-programmers to a weeklong programming hackathon in June 2012. During the hackathon (and a three-month follow-up period), 5 teams produced designs, implementations, documentation, presentations, and tests including: (1) a generalized scheme for integrating components; (2) proof-of-concept pruners and controllers; (3) a meta-API for taxonomic name resolution services; (4) a system for storing, finding, and retrieving phylogenies using semantic web technologies for data exchange, storage, and querying; (5) an innovative new service, DateLife.org, which synthesizes pre-computed, time-calibrated phylogenies to assign ages to nodes; and (6) demonstration projects. These outcomes are accessible via a public code repository (GitHub.com), a website (http://www.phylotastic.org), and a server image. CONCLUSIONS: Approximately 9 person-months of effort (centered on a software development hackathon) resulted in the design and implementation of proof-of-concept software for 4 core phylotastic components, 3 controllers, and 3 end-user demonstration tools. While these products have substantial limitations, they suggest considerable potential for a distributed system that makes phylogenetic knowledge readily accessible in computable form. Widespread use of phylotastic systems will create an electronic marketplace for sharing phylogenetic knowledge that will spur innovation in other areas of the ToL enterprise, such as annotation of sources and methods and third-party methods of quality assessment.
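As a toy illustration of the "prune away unneeded parts" step in such a workflow, the sketch below restricts a nested-tuple tree to a query list of leaf names; the tree encoding and the prune function are invented for this example and are not any of the hackathon's actual interfaces.

    def prune(tree, keep):
        """Restrict a tree to the leaves in `keep`.

        tree: a leaf name (str) or a tuple of subtrees.
        Returns the pruned tree, dropping empty subtrees and collapsing internal
        nodes left with a single child; returns None if nothing survives.
        """
        if isinstance(tree, str):
            return tree if tree in keep else None
        kept = [p for p in (prune(child, keep) for child in tree) if p is not None]
        if not kept:
            return None
        return kept[0] if len(kept) == 1 else tuple(kept)

    # Usage: prune a small expert tree down to a three-species query list.
    expert_tree = ((("Homo_sapiens", "Pan_troglodytes"), "Mus_musculus"), "Canis_lupus")
    print(prune(expert_tree, {"Homo_sapiens", "Mus_musculus", "Canis_lupus"}))
    # -> (('Homo_sapiens', 'Mus_musculus'), 'Canis_lupus')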
Physical Activity, Central Adiposity, and Functional Limitations in Community-Dwelling Older Adults.
Abstract:
BACKGROUND AND PURPOSE: Obesity and physical inactivity are independently associated with physical and functional limitations in older adults. The current study examines the impact of physical activity on odds of physical and functional limitations in older adults with central and general obesity. METHODS: Data from 6279 community-dwelling adults aged 60 years or more from the Health and Retirement Study 2006 and 2008 waves were used to calculate prevalence and odds of physical and functional limitation among obese older adults with high waist circumference (waist circumference ≥88 cm in females and ≥102 cm in males) who were physically active versus inactive (engaging in moderate/vigorous activity less than once per week). Logistic regression models were adjusted for age, sex, race/ethnicity, education, smoking status, body mass index, and number of comorbidities. RESULTS: Physical activity was associated with lower odds of physical and functional limitations among older adults with high waist circumference (odds ratio [OR], 0.59; confidence interval [CI], 0.52-0.68, for physical limitations; OR, 0.52; CI, 0.44-0.62, for activities of daily living; and OR, 0.44; CI, 0.39-0.50, for instrumental activities of daily living). CONCLUSIONS: Physical activity is associated with significantly lower odds of physical and functional limitations in obese older adults regardless of how obesity is classified. Additional research is needed to determine whether physical activity moderates long-term physical and functional limitations.
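For readers less familiar with the reported effect sizes, the sketch below shows how an unadjusted odds ratio and its 95% Wald confidence interval follow from a 2x2 table of counts; the counts are hypothetical, and the study's estimates above come from logistic regression adjusted for the listed covariates rather than from a raw table.

    import numpy as np

    def odds_ratio(a, b, c, d):
        """Unadjusted odds ratio and 95% Wald CI from a 2x2 table.

        a, b: limited / not limited among physically active participants
        c, d: limited / not limited among inactive participants
        """
        or_ = (a * d) / (b * c)
        se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log_or)
        return or_, (lo, hi)

    print(odds_ratio(300, 700, 420, 580))   # hypothetical counts, not study data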