963 results for numerical integration methods
Abstract:
OBJECTIVE: In a prospective study we investigated whether numerical and functional changes of CD4+CD25(high) regulatory T cells (Treg) were associated with changes of disease activity observed during pregnancy and post partum in patients with rheumatoid arthritis (RA). METHODS: The frequency of CD4+CD25(high) T cells was determined by flow cytometry in 12 patients with RA and 14 healthy women during and after pregnancy. Fluorescence-activated cell sorting (FACS) was used to sort CD4+CD25(high) and CD4+CD25- T cells, which were stimulated with anti-CD3 and anti-CD28 monoclonal antibodies alone or in co-culture to investigate proliferation and cytokine secretion. RESULTS: Frequencies of CD4+CD25(high) Treg were significantly higher in the third trimester compared to 8 weeks post partum in both patients and controls. Numbers of CD4+CD25(high) Treg inversely correlated with disease activity in the third trimester and post partum. In co-culture experiments, significantly higher amounts of IL10 and lower levels of tumour necrosis factor (TNF)alpha and interferon (IFN)gamma were found in supernatants of third-trimester samples compared to postpartum samples. These findings were independent of health or disease status during pregnancy; however, postpartum TNFalpha and IFNgamma levels were higher in patients with disease flares. CONCLUSION: The amelioration of disease activity in the third trimester corresponded to the increased number of Treg, which induced a pronounced anti-inflammatory cytokine milieu. The pregnancy-related quantitative and qualitative changes of Treg suggest a beneficial effect of Treg on disease activity.
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamic coupling between a fluid and a solid, which is otherwise very complex, time consuming and expensive. A method that can accurately model these types of mechanical systems numerically is therefore highly attractive, and its advantages are even more obvious when considering huge structures such as bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the KLE to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time integration schemes and thus allows the various multiphysics problems to be tackled as separate modules. The current algorithm for KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed time step ODE solver when dealing with unsteady problems. This research deals with the analysis of the effects of the Courant-Friedrichs-Lewy (CFL) condition for KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the spatial mesh size.
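For illustration only (this is not code from the thesis), a CFL-like bound of this form is typically applied by recomputing the admissible time step from the current mesh size, so that refining the mesh tightens the time step. The constant, the exponent α, and the safety factor below are placeholder values.

import numpy as np

def cfl_time_step(h, alpha, C=1.0, safety=0.9):
    """Largest time step admitted by the bound dt <= C * h**alpha, with a safety margin."""
    return safety * C * h ** alpha

# Halving the mesh size tightens the admissible time step (alpha=2.0 is only an example value).
for h in (0.1, 0.05, 0.025):
    print(f"h = {h:6.3f}  ->  dt <= {cfl_time_step(h, alpha=2.0):.5f}")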
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of the new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven schemes, capable of predicting the actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric based on the so-called Subspace Realization Theory is proposed, adapted for stochastic, non-stationary and time-varying systems, as is the case for the complex aerodynamics of a HAWT. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating based on Adaptive Simulated Annealing (ASA). This iterative engine is driven by a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
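For reference, the Modal Assurance Criterion used as the model-updating metric above is a standard correlation measure between two mode shapes; the following minimal sketch (not the thesis implementation) computes it with NumPy.

import numpy as np

def mac(phi_a, phi_x):
    """Modal Assurance Criterion between two mode-shape vectors.

    MAC = |phi_a^H phi_x|^2 / ((phi_a^H phi_a)(phi_x^H phi_x));
    values near 1 indicate strongly correlated (consistent) mode shapes.
    """
    phi_a = np.asarray(phi_a, dtype=complex).ravel()
    phi_x = np.asarray(phi_x, dtype=complex).ravel()
    num = abs(np.vdot(phi_a, phi_x)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_x, phi_x).real
    return num / den

# Example: compare an identified mode shape with a model prediction (result is close to 1).
print(mac([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))

In a model-updating loop of the kind described, the optimizer would adjust model parameters so that MAC values between predicted and identified modes approach 1.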
Abstract:
BACKGROUND: Eosinophil differentiation, activation, and survival are largely regulated by IL-5. IL-5-mediated transmembrane signal transduction involves both Lyn-mitogen-activated protein kinases and Janus kinase 2-signal transducer and activator of transcription pathways. OBJECTIVE: We sought to determine whether additional signaling molecules/pathways are critically involved in IL-5-mediated eosinophil survival. METHODS: Eosinophil survival and apoptosis were measured in the presence and absence of IL-5 and defined pharmacologic inhibitors in vitro. The specific role of the serine/threonine kinase proviral integration site for Moloney murine leukemia virus (Pim) 1 was tested by using HIV-transactivator of transcription fusion proteins containing wild-type Pim-1 or a dominant-negative form of Pim-1. The expression of Pim-1 in eosinophils was analyzed by means of immunoblotting and immunofluorescence. RESULTS: Although pharmacologic inhibition of phosphatidylinositol-3 kinase (PI3K) by LY294002, wortmannin, or the selective PI3K p110delta isoform inhibitor IC87114 was successful in each case, only LY294002 blocked increased IL-5-mediated eosinophil survival. This suggested that LY294002 inhibited another kinase that is critically involved in this process in addition to PI3K. Indeed, Pim-1 was rapidly and strongly expressed in eosinophils after IL-5 stimulation in vitro and readily detected in eosinophils under inflammatory conditions in vivo. Moreover, by using specific protein transfer, we identified Pim-1 as a critical element in IL-5-mediated antiapoptotic signaling in eosinophils. CONCLUSIONS: Pim-1, but not PI3K, plays a major role in IL-5-mediated antiapoptotic signaling in eosinophils.
Abstract:
This study aims to evaluate whether visualization and integration of the computed tomography (CT) scan of the left atrium (LA) and the esophagus into the three-dimensional (3D) electroanatomical map the day before ablation is accurate compared with integration of an esophagus tag into the electroanatomical LA map, which visualizes the anatomic relationship during radiofrequency ablation, or whether esophagus movement prevents reliable visualization of the esophagus the day before ablation.
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained for two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a step-wise method: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes. Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
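For illustration, the CRS model listed under (ii) has a simple closed form: the age at depth z is t(z) = (1/λ)·ln(A(0)/A(z)), where A(0) is the total unsupported 210Pb inventory, A(z) the inventory below depth z, and λ the 210Pb decay constant. A minimal sketch (not the authors' code) of this calculation:

import numpy as np

PB210_HALF_LIFE = 22.3                       # years
LAM = np.log(2.0) / PB210_HALF_LIFE          # 210Pb decay constant (1/yr)

def crs_ages(layer_inventories):
    """CRS ages (years before coring) at the bottom of each layer.

    layer_inventories: unsupported 210Pb inventory of each layer
    (activity times dry mass per unit area), ordered top to bottom.
    The basal age diverges as A(z) -> 0, which is why age errors grow at
    the bottom of the profile and independent markers such as the 137Cs
    AD 1964 peak are used to constrain the model.
    """
    inv = np.asarray(layer_inventories, dtype=float)
    total = inv.sum()                        # A(0)
    below = total - np.cumsum(inv)           # A(z) below each layer bottom
    with np.errstate(divide="ignore"):
        return np.log(total / below) / LAM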
Abstract:
The integration of academic and non-academic knowledge is a key concern for researchers who aim at bridging the gap between research and policy. Researchers involved in the sustainability-oriented NCCR North-South programme have found that linking different types of knowledge requires time and effort, and that methodologies for doing so are still lacking. One programme component was created at the inception of this transdisciplinary research programme to support exchange between researchers, development practitioners and policymakers. After 8 years of research, the programme is assessing whether research has indeed enabled continuous communication across and beyond academic boundaries and has effected changes in the public policies of poor countries. In a first review of the data, we selected two case studies explicitly addressing the lives of women. In both cases – one in Pakistan, the other in Nepal – the dialogue between researchers and development practitioners contributed to important policy changes concerning female migration. In both countries, outmigration has become an increasingly important livelihood strategy. National migration policies are gendered, limiting the international migration of women. In Nepal, women were not allowed to migrate to specific countries such as the Gulf States or Malaysia. This was done in the name of positive discrimination, to protect women from potential exploitation and harassment in domestic work. However, women continued to migrate in other, often illegal and riskier, ways, which increased their vulnerability. In Pakistan, female labour migration was not allowed at all, and male migration increased the vulnerability of the families remaining at home. Researchers and development practitioners in Nepal and Pakistan brought women's shared experience of and exposure to the mechanisms of male domination into the public debate and addressed the discriminating laws. Now, for the first time in Pakistan, the new draft policy currently under discussion would enable broad-based female labour migration. What can we learn from the two case studies with regard to ways of relating experience-based and research-based knowledge? The paper offers insights into the sequence of interactions between researchers, local people, development practitioners and policy-makers which eventually contributed to the formulation of a rights-based migration policy. The reflection aims at exploring the gendered dimension of ways to co-produce and share knowledge for development across boundaries. Above all, it should help researchers to strengthen the links between the spheres of research and policy in the future.
Abstract:
In recent years, interactive media and tools such as scientific simulations, simulation environments and dynamic data visualizations have become established methods in the neural and cognitive sciences. Hence, university teachers of the neural and cognitive sciences are faced with the challenge of integrating these media into the neuroscience curriculum. Simulations and dynamic visualizations in particular offer great opportunities for teachers and learners, since they are both illustrative and explorable. However, simulations pose instructional problems: they are abstract and demand both computer skills and conceptual knowledge of what the simulations are intended to explain. Guided by two central questions, this article provides an overview of possible approaches for neuroscience education and opens perspectives for their curricular integration: (i) how can complex scientific media be transformed for educational use in a manner that is efficient and comprehensible for students at all levels, and (ii) what technical infrastructure can support this transformation? Using educational simulations for the neurosciences and their application in courses as examples, answers to these questions are proposed (a) by introducing a specific educational simulation approach for the neurosciences, (b) by introducing an e-learning environment for simulations, and (c) by providing examples of curricular integration at different levels that may help academic teachers integrate newly created or existing interactive educational resources into their courses.
Abstract:
Additive (generative) manufacturing processes have long since taken their place in the value chain alongside conventional processes. Nevertheless, users need to be made aware, time and again, of the rather abstract possibilities and opportunities these processes offer. Lateral thinking can often contribute to a successful solution faster and more efficiently than traditional routes. This contribution therefore addresses several key points that make additive processes, and LaserCUSING® in particular, an eminently sensible addition to a company's technology portfolio. In addition to the production of metallic prototypes, the presentation focuses in particular on the variety of effects achievable with channels integrated into mold inserts, especially in injection mold making.
Abstract:
This paper presents an approach for integrating energy costs into existing production control methods. The developed method is based on load-oriented order release (Belastungsorientierte Auftragsfreigabe, BOA) and takes into account fluctuating electricity prices resulting from the increasing feed-in of renewable energy into the power grid. The extension enables small and medium-sized enterprises (SMEs) in particular to save energy costs through organizational production control measures, without capital-intensive investments.
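As a purely illustrative sketch (the paper's actual procedure is not given here), a load-oriented order release extended by an electricity-price condition could look as follows; the function name, load limit and price threshold are hypothetical.

def release_orders(order_queue, current_load, load_limit,
                   electricity_price, price_threshold):
    """Release orders while the load limit is respected and power is cheap.

    order_queue: list of (order_id, work_content_hours), sorted by urgency.
    Returns the list of released order ids.
    """
    released = []
    if electricity_price > price_threshold:
        return released                      # postpone releases to a cheaper period
    for order_id, work_content in order_queue:
        if current_load + work_content <= load_limit:
            current_load += work_content
            released.append(order_id)
    return released

# Example: at a low electricity price, two of three waiting orders fit under the load limit.
print(release_orders([("A", 4.0), ("B", 6.0), ("C", 3.0)],
                     current_load=10.0, load_limit=18.0,
                     electricity_price=35.0, price_threshold=50.0))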
Abstract:
ONTOLOGIES AND METHODS FOR INTEROPERABILITY OF ENGINEERING ANALYSIS MODELS (EAMs) IN AN E-DESIGN ENVIRONMENT. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Sciences, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent KB tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and the shell element models during an optimization process to select the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is the automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second method is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. Together, these methods help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
Abstract:
A 3-year study, using 84 fall-born and 28 spring-born calves of similar genotypes, was conducted to integrate pasturing systems with drylot feeding systems. Calves were started on test following weaning in May and October. Seven treatments were imposed: 1) fall-born calves directly into the feedlot; 2 and 3) fall-born calves put on pasture with or without ionophore and moved to the feedlot at the end of July; 4 and 5) fall-born calves put on pasture with or without ionophore and moved to the feedlot at the end of October; 6 and 7) spring-born calves put on pasture with or without ionophore and moved to the feedlot at the end of October. A bromegrass pasture consisting of 16 paddocks, each 1.7 acres in size, was available. Each treatment group had access to one paddock at a time and was rotated at approximately 3-day intervals. In the feedlot, steers were provided an 82% concentrate diet containing whole-shelled corn, ground alfalfa hay, and a protein, vitamin and mineral supplement containing ionophore and molasses. As pens of cattle reached about 1150 lb. average live weight, they were processed and carcass traits were evaluated. Pasture daily gains were highest for cattle on pasture for the longest duration (P < .03), and overall daily gains were highest for drylot cattle (P < .01) and decreased with increased time spent on pasture. Although differences among treatments existed in numerical scores for yield and quality grades (P < .05 and P < .03, respectively), all treatments provided average yield grade scores of 2 and quality grades of low Choice or higher. Use of four production-cost and pricing scenarios revealed that fall-born calves placed on pasture for varying lengths of time were the most profitable (P < .04) among the treatments. Furthermore, a 5% price sensitivity analysis indicated that fed-cattle selling price had the greatest impact on profit potential, followed in importance by feeder purchase price and corn grain price. Overall, these findings should provide significant production alternatives for some segments of the cattle feeding industry and also lend substantial credence to the concept of sustainable agriculture.
Abstract:
BACKGROUND Delayed enhancement (DE) MRI can assess the fibrotic substrate of scar-related ventricular tachycardia (VT). MDCT has the advantage of inframillimetric spatial resolution and better 3D reconstructions. We sought to evaluate the feasibility and usefulness of integrating merged MDCT/MRI data in 3D-mapping systems for structure-function assessment and multimodal guidance of VT mapping and ablation. METHODS Nine patients, including 3 with ischemic cardiomyopathy (ICM), 3 with nonischemic cardiomyopathy (NICM), 2 with myocarditis, and 1 undergoing a redo procedure for idiopathic VT, underwent MRI and MDCT before VT ablation. Merged MRI/MDCT data were integrated in 3D-mapping systems and registered to high-density endocardial and epicardial maps. Low-voltage areas (<1.5 mV) and local abnormal ventricular activities (LAVA) during sinus rhythm were correlated with DE on MRI and wall thinning (WT) on MDCT. RESULTS Endocardium and epicardium were mapped with 391 ± 388 and 1098 ± 734 points per map, respectively. Registration of MDCT allowed visualization of the coronary arteries during epicardial mapping/ablation. In the patient with idiopathic VT, integration of MRI data identified previously ablated regions. In ICM patients, both DE on MRI and WT on MDCT matched areas of low voltage (overlap 94 ± 6% and 79 ± 5%, respectively). In NICM patients, wall-thinning areas matched areas of low voltage (overlap 63 ± 21%). In patients with myocarditis, subepicardial DE matched areas of epicardial low voltage (overlap 92 ± 12%). A total of 266 LAVA sites were found in 7/9 patients. All LAVA sites were associated with structural substrate on imaging (90% inside, 100% within 18 mm). CONCLUSION The integration of merged MDCT and DE-MRI data is feasible and combines substrate assessment with high spatial resolution to better define the structure-function relationship in scar-related VT.
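The reported overlap percentages admit a simple operationalization; the sketch below is an assumption on our part (the study's exact definition is not given here) and computes the percentage of low-voltage mapping points that fall inside the registered imaging-defined substrate.

import numpy as np

def overlap_percent(bipolar_voltage_mv, inside_imaging_substrate):
    """Percentage of low-voltage (<1.5 mV) mapping points lying inside the
    imaging-defined substrate (DE on MRI or wall thinning on MDCT) after
    registration. Both inputs are per-mapping-point arrays."""
    low = np.asarray(bipolar_voltage_mv) < 1.5
    inside = np.asarray(inside_imaging_substrate, dtype=bool)
    if low.sum() == 0:
        return float("nan")                  # no low-voltage points on this map
    return 100.0 * np.count_nonzero(low & inside) / low.sum()

# Example with four mapping points: two are low-voltage and one of them lies
# inside the imaging substrate, giving 50% overlap.
print(overlap_percent([0.4, 2.1, 1.0, 3.0], [True, True, False, False]))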
Abstract:
We investigate a class of optimal control problems that exhibit constant, exogenously given delays in the control in the equation of motion of the differential states. To this end, we formulate an exemplary optimal control problem with one stock and one control variable and review some analytic properties of an optimal solution. However, analytical considerations are quite limited in the case of delayed optimal control problems. In order to overcome these limits, we reformulate the problem and apply direct numerical methods to calculate approximate solutions that give a better understanding of this class of optimization problems. In particular, we present two possibilities for reformulating the delayed optimal control problem as an instantaneous optimal control problem and show how these can be solved numerically with a state-of-the-art direct method, namely Bock's direct multiple shooting algorithm. We further demonstrate the strength of our approach with two economic examples.
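To make the reformulation idea concrete, here is a minimal single-shooting sketch (not Bock's multiple shooting, and with made-up problem data): with piecewise-constant controls on a uniform grid and a delay equal to an integer number of intervals, the delayed control u(t - tau) on interval i is simply the optimization variable q[i - d], so the delayed problem becomes an ordinary finite-dimensional one.

import numpy as np
from scipy.optimize import minimize

# Toy delayed problem: minimize the integral of x^2 + u^2 over [0, T]
# subject to x'(t) = a*x(t) + b*u(t - tau), x(0) = x0, u(t) = 0 for t < 0.
T, N = 5.0, 50
dt = T / N
tau = 0.5
d = int(round(tau / dt))                   # delay measured in control intervals
a, b, x0 = 0.5, 1.0, 1.0

def objective(q):
    """Forward-Euler rollout; the delay becomes an index shift q[i - d]."""
    x, cost = x0, 0.0
    for i in range(N):
        u_delayed = q[i - d] if i >= d else 0.0   # prescribed control history
        cost += (x**2 + q[i]**2) * dt
        x += (a * x + b * u_delayed) * dt
    return cost

res = minimize(objective, np.zeros(N), method="L-BFGS-B")
print("approximate optimal cost:", res.fun)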