934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
To estimate a parameter in an elliptic boundary value problem, the equation error method chooses the value that minimizes the residual in the PDE and boundary condition, with the solution of the BVP replaced by a measurement. The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived both for the underlying (infinite-dimensional) problem and for a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
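A minimal numerical sketch of the idea on a toy 1-D problem (assumed data, not the paper's algorithm): once the measured solution is substituted into the PDE, the equation error is linear in the unknown coefficient, so the estimate reduces to Tikhonov-regularized linear least squares.

```python
# Equation-error estimation with Tikhonov regularization for the toy BVP
#   -(a(x) u'(x))' = f(x),  u(0) = u(1) = 0.
# The "truth" below is synthetic: u = x(1-x), a = 1+x, hence f = 1 + 4x.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                       # number of cells
h = 1.0 / n
x_nodes = np.linspace(0.0, 1.0, n + 1)
x_mid = 0.5 * (x_nodes[:-1] + x_nodes[1:])    # a(x) lives at cell midpoints

u_true = x_nodes * (1.0 - x_nodes)
a_true = 1.0 + x_mid
f = 1.0 + 4.0 * x_nodes[1:-1]                 # f at interior nodes

# Measurement replaces the exact solution in the PDE.
z = u_true + 1e-5 * rng.standard_normal(u_true.shape)
dz = np.diff(z) / h                           # z' at midpoints

# Row j enforces -(a_{j+1/2} z'_{j+1/2} - a_{j-1/2} z'_{j-1/2}) / h = f_j,
# which is linear in the unknown midpoint values of a.
M = np.zeros((n - 1, n))
for j in range(1, n):
    M[j - 1, j] = -dz[j] / h
    M[j - 1, j - 1] = dz[j - 1] / h

# Tikhonov: min ||M a - f||^2 + alpha ||a - a0||^2, solved as stacked lstsq.
alpha, a0 = 1e-6, np.ones(n)
A = np.vstack([M, np.sqrt(alpha) * np.eye(n)])
b = np.concatenate([f, np.sqrt(alpha) * a0])
a_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error in a:",
      np.linalg.norm(a_est - a_true) / np.linalg.norm(a_true))
```

The regularization term is what tames the rows where z' is nearly zero (around x = 0.5 here), which is exactly the instability the abstract refers to.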
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e., the dynamic coupling between a fluid and a solid, which is otherwise very complex, time-consuming, and expensive to study. A method that can accurately model these types of mechanical systems numerically is therefore a great option; its advantages are even more obvious for huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the KLE to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time-integration schemes and thus allows us to tackle the various multiphysics problems as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver when dealing with unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h is the spatial discretization parameter.
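As a minimal illustration of such a CFL-like bound (a simple model problem, not the KLE itself): for explicit time stepping of 1-D diffusion, a stand-in for the viscous term of the unsteady Stokes problem, the constraint takes the form Δt ≤ const · h^α with α = 2.

```python
# Stability of explicit Euler for u_t = nu * u_xx: the scheme is stable
# only if dt <= h^2 / (2*nu), a CFL-like condition with alpha = 2.
import numpy as np

def run_explicit(n_cells, safety, nu=1.0, t_end=0.1):
    """Run with dt = safety * h**2 / (2*nu); return max |u| at t_end."""
    h = 1.0 / n_cells
    dt = safety * h**2 / (2.0 * nu)          # the dt <= const * h^2 bound
    x = np.linspace(0.0, 1.0, n_cells + 1)
    u = np.sin(np.pi * x)                    # initial condition, u(0) = u(1) = 0
    for _ in range(int(t_end / dt)):
        u[1:-1] += dt * nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return np.abs(u).max()

for safety in (0.9, 1.1):                    # just below / just above the bound
    print(f"safety={safety}: max|u| = {run_explicit(64, safety):.3e}")
# The sub-critical run decays smoothly; the super-critical one blows up
# (inf/nan) -- the h-dependent time-step constraint in action.
```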
Abstract:
KIVA is a FORTRAN code developed by Los Alamos National Laboratory to simulate complete engine cycles. KIVA is a flow solver used to compute properties of a fluid flow field, employing various numerical schemes and methods to solve the Navier-Stokes equations. This project improves the accuracy of one such scheme by upgrading it to a higher-order scheme. The numerical scheme to be modified is used in the critical final-stage calculation called the rezoning phase. The primary objective of this project is to implement a higher-order numerical scheme and to verify and validate that the new scheme is better than the existing one. The latest version of the KIVA family (KIVA 4) is used for implementing the higher-order scheme because it supports unstructured meshes. The code is validated using the traditional shock tube problem, and the results are verified to be more accurate than those of the existing schemes with reference to the analytical result. A convection test is performed to compare computational accuracy on convective transfer; the new scheme shows less numerical diffusion than the existing schemes. A four-valve pentroof engine, an example case from the KIVA package, is used as an application to ensure the stability of the scheme in practice, with results compared on the temperature profile. Despite these positive results, the implemented scheme has the downside of consuming more CPU time for the computational analysis; a detailed comparison is provided. Overall, the implementation of the higher-order scheme in KIVA 4 is verified to be successful, and it gives better results than the existing scheme, which satisfies the objective of this project.
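A toy demonstration of the kind of convection test described (the abstract does not specify KIVA's actual scheme; second-order Lax-Wendroff stands in here for "higher order" against first-order upwind on 1-D linear advection):

```python
# First-order upwind vs. second-order Lax-Wendroff on u_t + c u_x = 0 with
# periodic BCs: after one full period the exact solution equals the initial
# pulse, so any error is numerical diffusion/dispersion.
import numpy as np

n, c, cfl = 200, 1.0, 0.5
h = 1.0 / n
dt = cfl * h / c
x = np.arange(n) * h
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse

steps = int(round(1.0 / dt))                       # exactly one period
up, lw = u0.copy(), u0.copy()
for _ in range(steps):
    up -= cfl * (up - np.roll(up, 1))              # first-order upwind
    lw += (-0.5 * cfl * (np.roll(lw, -1) - np.roll(lw, 1))
           + 0.5 * cfl**2 * (np.roll(lw, -1) - 2.0 * lw + np.roll(lw, 1)))

for name, u in (("upwind", up), ("Lax-Wendroff", lw)):
    print(f"{name:13s} L1 error = {np.abs(u - u0).mean():.4f}")
# The second-order scheme keeps the pulse much sharper (smaller L1 error),
# at the cost of more work per step -- mirroring the CPU-time trade-off above.
```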
Abstract:
OBJECTIVES: Reactivation of latent tuberculosis (TB) in inflammatory bowel disease (IBD) patients treated with anti-tumor necrosis factor-alpha medication is a serious problem. Currently, TB screening includes chest x-rays and a tuberculin skin test (TST). The interferon-gamma release assay (IGRA) QuantiFERON-TB Gold In-Tube (QFT-G-IT) shows better specificity for diagnosing TB than the skin test. This study evaluates the two test methods among IBD patients. METHODS: Both TST and IGRA were performed on 212 subjects (114 Crohn's disease, 44 ulcerative colitis, 10 indeterminate colitis, 44 controls). RESULTS: Eighty-one percent of IBD patients were under immunosuppressive therapy; 71% of all subjects were vaccinated with Bacille Calmette-Guérin (BCG); 18% of IBD patients and 43% of controls tested positive with the skin test (P < 0.0001). Vaccinated controls tested positive more often with the skin test (52%) than did vaccinated IBD patients (23%) (P = 0.011). Significantly fewer immunosuppressed patients tested positive with the skin test than did patients not receiving therapy (P = 0.007). Eight percent of IBD patients tested positive with the QFT-G-IT test (14/168), compared to 9% (4/44) of controls. Test agreement was significantly higher in the controls (P = 0.044) than in the IBD group. CONCLUSIONS: Agreement between the two test methods is poor in IBD patients. In contrast to the QFT-G-IT test, the TST is negatively influenced by immunosuppressive medication and vaccination status, and should thus be replaced by the IGRA for TB screening in immunosuppressed patients with IBD.
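To make the agreement statement concrete: a common way to quantify agreement between two binary tests is Cohen's kappa. In the sketch below, the marginals roughly match the abstract (about 30/168 TST-positive and 14/168 QFT-positive IBD patients), but the joint 2x2 cell counts are invented placeholders, not the study's data.

```python
# Cohen's kappa for agreement between two binary tests (TST vs. QFT-G-IT).
# The 2x2 counts used in the call are HYPOTHETICAL placeholders.
def cohens_kappa(both_pos, tst_only, qft_only, both_neg):
    n = both_pos + tst_only + qft_only + both_neg
    p_obs = (both_pos + both_neg) / n                  # observed agreement
    p_tst = (both_pos + tst_only) / n                  # marginal: TST positive
    p_qft = (both_pos + qft_only) / n                  # marginal: QFT positive
    p_exp = p_tst * p_qft + (1 - p_tst) * (1 - p_qft)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Placeholder counts for a cohort of 168 IBD patients:
print(f"kappa = {cohens_kappa(5, 25, 9, 129):.2f}")    # low kappa = poor agreement
```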
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based on both tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
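A toy sketch of the underlying idea (the paper's Test Blueprint is built on execution traces and object flow; this fragment merely diffs one object's state around a single call, and all names are illustrative):

```python
# Diff an object's state before/after calling the unit under test and print
# candidate assertions -- a crude stand-in for side-effect-guided test writing.
import copy

def suggest_assertions(obj, method_name, *args, **kwargs):
    """Run obj.method(*args), diff its state, and print assertion stubs."""
    before = copy.deepcopy(vars(obj))
    result = getattr(obj, method_name)(*args, **kwargs)
    for attr, new in vars(obj).items():
        if attr not in before or before[attr] != new:
            print(f"self.assertEqual(obj.{attr}, {new!r})")
    if result is not None:
        print(f"self.assertEqual(result, {result!r})")
    return result

# Usage with a hypothetical legacy class:
class Account:
    def __init__(self):
        self.balance = 0
        self.history = []
    def deposit(self, amount):
        self.balance += amount
        self.history.append(amount)
        return self.balance

suggest_assertions(Account(), "deposit", 50)
# Prints stubs for the changed attributes (balance, history) and the return
# value -- a "blueprint" of the expected effects to verify in a test.
```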
Abstract:
The “Declaration on a balanced interpretation of the ‘Three-Step Test’” as such cannot solve the problem of missing copyright limitations; however, it emphasizes that the existing international legislation does not prohibit further amendments to copyright law. Nations that have the political will are in a position to introduce new limitations. In addition, further international agreements focusing on new limitations may be negotiated among those countries that are ready to do so.
Abstract:
The vector channel spectral function and the dilepton production rate from a QCD plasma at a temperature above a few hundred MeV are evaluated up to next-to-leading order (NLO), including their dependence on a non-zero momentum with respect to the heat bath. The invariant mass of the virtual photon is taken to be in the range K² ~ (πT)² ~ (1 GeV)², generalizing previous NLO results valid for K² ≫ (πT)². In the opposite regime 0 < K² ≪ (πT)² the loop expansion breaks down, but agrees nevertheless in order of magnitude with a previous result obtained through resummations. Ways to test the vector spectral function through comparisons with imaginary-time correlators measured on the lattice are discussed.
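The standard bridge used in such lattice comparisons relates the Euclidean (imaginary-time) correlator to the spectral function; up to convention-dependent factors of π,

\[
G_V(\tau, \vec{k}) \;=\; \int_0^{\infty} \frac{\mathrm{d}\omega}{2\pi}\, \rho_V(\omega, \vec{k})\,
\frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}{\sinh\!\left[\tfrac{\omega}{2T}\right]},
\qquad 0 < \tau < \tfrac{1}{T}.
\]

This kernel is what allows an NLO prediction for ρ_V to be confronted with lattice measurements of G_V.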
Abstract:
The authors review the implicit association test (IAT), its use in marketing, and the methodology and validity issues that surround it. They focus on a validity problem that has not been investigated previously, namely, the impact of cognitive inertia on IAT effects. Cognitive inertia refers to the difficulty in switching from one categorization rule to another, which causes IAT effects to depend on the order of administration of the two IAT blocks. In Study 1, the authors observe an IAT effect when the compatible block precedes the incompatible block but not when it follows the incompatible block. In Studies 2 and 3, the IAT effect changes its sign when the order of the blocks reverses. Cognitive inertia distorts individual IAT scores and diminishes the correlations between IAT scores and predictor variables when the block order is counterbalanced between subjects. Study 4 shows that counterbalancing the block order repeatedly within subjects can eliminate cognitive inertia effects on the individual level. The authors conclude that researchers should either interpret IAT scores at the aggregate level or, if individual IAT scores are of interest, counterbalance the block order repeatedly within subjects.
Abstract:
In the last century, several mathematical models have been developed to calculate blood ethanol concentrations (BAC) from the amount of ingested ethanol and vice versa. The most common one in the field of forensic sciences is Widmark's equation. A drinking experiment with 10 volunteers was performed with a target BAC of 1.2 g/kg, estimated using Widmark's equation as well as Watson's factor. The ethanol concentrations in the blood were measured using headspace gas chromatography with flame ionization detection and, additionally, with an alcohol dehydrogenase (ADH)-based method. In a healthy 75-year-old man, a distinct discrepancy between the intended and the measured blood ethanol concentration was observed: a BAC of 1.83 g/kg was measured, and the man showed signs of intoxication. A possible explanation for the discrepancy is the reduced total body water content in older people. The incident shows that caution is advised when applying these mathematical models to aged people, since calculated results may deviate from what biological systems actually produce.
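A hedged worked example of Widmark-style dose planning (classic textbook factors; the weight and factors below are illustrative assumptions, not the study's individual parameters, and elimination during drinking is neglected):

```python
# Widmark's relation: BAC c = A / (r * W), so the dose for a target BAC is
# A = c * r * W, with r the Widmark factor and W the body weight.
def widmark_dose(target_bac_g_per_kg, body_weight_kg, r_factor):
    """Ethanol mass (g) to ingest for a target BAC, neglecting elimination."""
    return target_bac_g_per_kg * r_factor * body_weight_kg

weight = 80.0          # hypothetical body weight in kg
r_male = 0.68          # commonly cited Widmark factor for men (women ~0.55)
dose_g = widmark_dose(1.2, weight, r_male)
print(f"ethanol dose: {dose_g:.0f} g "
      f"(~{dose_g / 0.789:.0f} ml of pure ethanol)")   # density 0.789 g/ml

# Why an elderly subject may overshoot: total body water, and hence r, is
# lower, so the same dose yields a higher BAC, e.g. with r = 0.55:
print(f"BAC with reduced r: {dose_g / (0.55 * weight):.2f} g/kg")
```

With these illustrative numbers, the same dose planned for a BAC of 1.2 g/kg lands near 1.5 g/kg when r drops from 0.68 to 0.55, the same direction of error the abstract reports.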
Abstract:
BACKGROUND: Early detection of colorectal cancer through timely follow-up of positive Fecal Occult Blood Tests (FOBTs) remains a challenge. In our previous work, we found that 40% of positive FOBT results eligible for colonoscopy had no documented response by a treating clinician at two weeks, despite procedures for electronic result notification. We determined whether technical and/or workflow-related aspects of automated communication in the electronic health record could lead to the lack of response. METHODS: Using both qualitative and quantitative methods, we evaluated positive FOBT communication in the electronic health record of a large, urban facility between May 2008 and March 2009. We identified the source of test-result communication breakdown and developed an intervention to fix the problem. Explicit medical record reviews measured timely follow-up (defined as response within 30 days of a positive FOBT) pre- and post-intervention. RESULTS: Data from 11 interviews and tracking information from 490 FOBT alerts revealed that the software intended to alert primary care practitioners (PCPs) of positive FOBT results was not configured correctly, and over a third of positive FOBTs were not transmitted to PCPs. Upon correction of the technical problem, lack of timely follow-up decreased immediately from 29.9% to 5.4% (p < 0.01) and was sustained at month 4 following the intervention. CONCLUSION: Electronic communication of positive FOBT results should be monitored to avoid limiting colorectal cancer screening benefits. Robust quality assurance and oversight systems are needed to achieve this. Our methods may be useful for others seeking to improve follow-up of FOBTs in their systems.
Abstract:
BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (i.e., the health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (e.g., ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123,638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
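A sketch of how such clustering-adjusted odds ratios can be estimated (generalized estimating equations are one standard choice; the abstract does not state the exact method, and the data below are synthetic, generated only to make the example runnable):

```python
# Logistic regression with alerts clustered within ordering HCPs, via GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1196                                     # one row per alert (synthetic)
df = pd.DataFrame({
    "hcp_id": rng.integers(0, 150, n),       # clustering unit: ordering HCP
    "trainee": rng.integers(0, 2, n),
    "dual_alert": rng.integers(0, 2, n),
})
logit = -2.0 + 1.7 * df.trainee + 0.7 * df.dual_alert   # assumed effects
df["unacknowledged"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

res = smf.gee("unacknowledged ~ trainee + dual_alert",
              groups="hcp_id", data=df,
              family=sm.families.Binomial()).fit()
print(np.exp(res.params))                    # odds ratios
print(np.exp(res.conf_int()))                # 95% CIs
```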
Abstract:
An Advanced Planning System (APS) offers support at all planning levels along the supply chain while observing limited resources. We consider an APS for process industries (e.g. chemical and pharmaceutical industries) consisting of the modules network design (for long-term decisions), supply network planning (for medium-term decisions), and detailed production scheduling (for short-term decisions). For each module, we outline the decision problem, discuss the specifics of process industries, and review state-of-the-art solution approaches. For the module detailed production scheduling, a new solution approach is proposed in the case of batch production, which can solve much larger practical problems than the methods known thus far. The new approach decomposes detailed production scheduling for batch production into batching and batch scheduling. The batching problem converts the primary requirements for products into individual batches, where the work load is to be minimized. We formulate the batching problem as a nonlinear mixed-integer program and transform it into a linear mixed-binary program of moderate size, which can be solved by standard software. The batch scheduling problem allocates the batches to scarce resources such as processing units, workers, and intermediate storage facilities, where some regular objective function like the makespan is to be minimized. The batch scheduling problem is modelled as a resource-constrained project scheduling problem, which can be solved by an efficient truncated branch-and-bound algorithm developed recently. The performance of the new solution procedures for batching and batch scheduling is demonstrated by solving several instances of a case study from process industries.
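A deliberately simplified illustration of the batching step (a toy computation, not the paper's mixed-binary program; the requirement and size bounds are hypothetical): split a primary requirement into equally sized batches within the processing unit's bounds while minimizing the number of batches, here taken as a proxy for work load.

```python
# Toy batching: cover a requirement with the fewest equal-size batches
# whose common size lies within the unit's [size_min, size_max] bounds.
import math

def batch(requirement, size_min, size_max):
    """Fewest equal-size batches covering the requirement within the bounds."""
    n = math.ceil(requirement / size_max)        # fewest batches that can cover it
    size = requirement / n                       # split the requirement equally
    if size < size_min:
        raise ValueError("requirement incompatible with batch size bounds")
    return n, size

# Hypothetical data: 1750 kg of a product, unit holds 200..600 kg per batch.
n, size = batch(1750.0, 200.0, 600.0)
print(f"{n} batches of {size:.1f} kg each")      # -> 3 batches of 583.3 kg
```

The real batching problem couples many products and levels, which is why the paper needs a mixed-binary formulation rather than this one-line rule.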
Abstract:
The paper deals with batch scheduling problems in process industries where final products arise from several successive chemical or physical transformations of raw materials using multi-purpose equipment. In batch production mode, the total requirements of intermediate and final products are partitioned into batches. The production start of a batch at a given level requires the availability of all input products. We consider the problem of scheduling the production of given batches such that the makespan is minimized. Constraints like minimum and maximum time lags between successive production levels, sequence-dependent facility setup times, finite intermediate storages, production breaks, and time-varying manpower contribute to the complexity of this problem. We propose a new solution approach using models and methods of resource-constrained project scheduling, which (approximately) solves problems of industrial size within a reasonable amount of time.
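The building block of many such project-scheduling methods is schedule generation under precedence and resource constraints. A minimal sketch (a greedy serial scheme on hypothetical data, showing only this building block, not the paper's algorithm):

```python
# Serial schedule generation for a tiny resource-constrained instance:
# job -> (duration, resource demand, predecessors); jobs are listed in a
# precedence-feasible order, as a serial scheme requires.
jobs = {
    "A": (3, 1, []),
    "B": (2, 2, []),
    "C": (4, 1, ["A"]),
    "D": (2, 2, ["A", "B"]),
    "E": (3, 1, ["C", "D"]),
}
CAPACITY = 2                                 # one renewable resource (e.g. workers)

finish, usage = {}, {}                       # usage[t] = demand in slot [t, t+1)
for name, (dur, dem, preds) in jobs.items():
    t = max((finish[p] for p in preds), default=0)   # earliest precedence-feasible start
    while any(usage.get(t + k, 0) + dem > CAPACITY for k in range(dur)):
        t += 1                               # shift right until resources admit the job
    for k in range(dur):
        usage[t + k] = usage.get(t + k, 0) + dem
    finish[name] = t + dur
    print(f"batch {name}: start {t}, finish {finish[name]}")
print("makespan:", max(finish.values()))
```

A heuristic like this gives feasible but generally suboptimal schedules; the branch-and-bound machinery referenced above is what pushes toward makespan-optimal ones.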
Abstract:
The paleoglaciological concept that during the Pleistocene glacial hemi-cycles a super-large, structurally complex ice sheet developed in the Arctic and behaved as a single dynamic system, as the Antarctic ice sheet does today, has not yet been subjected to concerted studies designed to test its predictions. Yet it may hold the keys to solving major problems of paleoglaciology and to understanding climate and sea-level changes. The Russian Arctic is the least-known region exposed to paleoglaciation by a hypothetical Arctic ice sheet, but it is now more open to testing the concept. Implementation of these tests is a challenging task, as the region is extensive and the available data are controversial. Well-planned and coordinated field projects are needed today, as well as broad discussion of the known evidence, existing interpretations, and new field results. Here we present the known evidence for paleoglaciation of the Russian Arctic continental shelf and reconstruct possible marine ice sheets that could have produced that evidence.
Abstract:
Contemporary models of self-regulated learning emphasize the role of distal motivational factors for students' achievement, on the one hand, and the proximal role of metacognitive monitoring and control for learning and test outcomes, on the other. In the present study, two large samples of elementary school children (9- and 11-year-olds) were included, and their mastery-oriented motivation and metacognitive monitoring and control skills were integrated into structural equation models testing and comparing the relative impact of these different constituents of self-regulated learning. First, results indicate that the factorial structure of monitoring, control, and mastery motivation was invariant across the two age groups. Of specific interest was the finding that there were age-dependent structural links between monitoring, control, and test performance (closer links in the older compared to the younger children), with high confidence yielding a direct and positive effect on test performance and a direct and negative effect on adequate control behavior in the achievement test. Mastery-oriented motivation was not substantially associated with monitoring (confidence), control (detection and correction of errors), or test performance, underlining the importance of proximal, metacognitive factors for test performance in elementary school children.