894 results for Numerical approximation and analysis
Abstract:
Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were encountered. To address this problem, and to allow comparison of autopsy and radiological data, a centralized, internet-based database for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for the validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI to forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging for particular forensic questions by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.
Abstract:
PURPOSE: To determine the effect of two pairs of echo times (TEs) for in-phase (IP) and opposed-phase (OP) 3.0-T magnetic resonance (MR) imaging on (a) quantitative analysis prospectively in a phantom study and (b) diagnostic accuracy retrospectively in a clinical study of adrenal tumors, with use of various reference standards in the clinical study. MATERIALS AND METHODS: A fat-saline phantom was used to perform IP and OP 3.0-T MR imaging for various fat fractions. The institutional review board approved this HIPAA-compliant study, with waiver of informed consent. Single-breath-hold IP and OP 3.0-T MR images in 21 patients (14 women, seven men; mean age, 63 years) with 23 adrenal tumors (16 adenomas, six metastases, one adrenocortical carcinoma) were reviewed. The MR protocol involved two acquisition schemes: In scheme A, the first OP echo (approximately 1.5-msec TE) and the second IP echo (approximately 4.9-msec TE) were acquired. In scheme B, the first IP echo (approximately 2.4-msec TE) and the third OP echo (approximately 5.8-msec TE) were acquired. Quantitative analysis was performed, and analysis of variance was used to test for differences between adenomas and nonadenomas. RESULTS: In the phantom study, scheme B did not enable discrimination among voxels that had small amounts of fat. In the clinical study, no overlap in signal intensity (SI) index values between adenomas and nonadenomas was seen (P < .05) with scheme A. However, with scheme B, no overlap in the adrenal gland SI-to-liver SI ratio between adenomas and nonadenomas was seen (P < .05). With scheme B, no overlap in adrenal gland SI index-to-liver SI index ratio between adenomas and nonadenomas was seen (P < .05). CONCLUSION: This initial experience indicates SI index is the most reliable parameter for characterization of adrenal tumors with 3.0-T MR imaging when obtaining OP echo before IP echo. 
When acquiring the IP echo before the OP echo, however, nonadenomas can be mistaken for adenomas if the SI index value is used.
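As an aside, the chemical-shift SI index discussed above is computed from paired in-phase/opposed-phase measurements. A minimal sketch of the calculation (the region-of-interest values below are hypothetical illustrations, not data from this study):

```python
def signal_intensity_index(si_in_phase, si_opposed_phase):
    """Chemical-shift signal intensity (SI) index, in percent: the
    fractional signal drop from in-phase to opposed-phase images.
    Lipid-rich adenomas show a large drop; lipid-poor lesions show little."""
    return (si_in_phase - si_opposed_phase) / si_in_phase * 100.0

# Hypothetical region-of-interest measurements (arbitrary units):
adenoma_index = signal_intensity_index(520.0, 260.0)     # 50.0 (marked drop)
metastasis_index = signal_intensity_index(480.0, 470.0)  # ~2.1 (minimal drop)
```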
Abstract:
We hypothesized that the spatial distribution of groundwater inflows through river bottom sediments is a critical factor associated with the selection of coaster brook trout (a life-history variant of Salvelinus fontinalis) spawning sites. An 80-m reach of the Salmon Trout River, in the Huron Mountains of the upper peninsula of Michigan, was selected to test the hypothesis based on long-term documentation of coaster brook trout spawning at this site. Throughout this site, the river is relatively similar along its length with regard to stream channel and substrate features. A monitoring well system consisting of an array of 27 wells was installed to measure subsurface temperatures underneath the riverbed over a 13-month period. The monitoring well locations were separated into areas where spawning has and has not been observed. Over 200,000 temperature measurements were collected from 5 depths within each of the 27 monitoring wells. Temperatures within the substrate at the spawning area were generally cooler and less variable than river temperatures. Substrate temperatures in the non-spawning area were generally warmer, more variable, and closely tracked temporal variations in river temperatures. Temperature data were inverted to obtain subsurface groundwater velocities using a numerical approximation of the heat transfer equation. Approximately 45,000 estimates of groundwater velocities were obtained. Estimated velocities in the spawning and non-spawning areas confirmed that groundwater velocities in the spawning area were primarily in the upward direction, and were generally greater in magnitude than velocities in the non-spawning area. In the non-spawning area there was a greater occurrence of velocities in the downward direction, and velocity estimates were generally smaller in magnitude than in the spawning area.
Both the temperature and velocity results confirm the hypothesis that spawning sites correspond to areas of significant groundwater influx to the river bed.
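Heat-as-tracer inversions of this kind are commonly built on the one-dimensional steady advection-conduction solution (the Bredehoeft-Papadopulos profile). A minimal sketch, with assumed thermal properties and a hypothetical mid-depth measurement (none of these numbers are from the study):

```python
import math

RHO_C_WATER = 4.18e6  # volumetric heat capacity of water, J/(m^3 K) (assumed)
K_THERMAL = 1.4       # bulk thermal conductivity of wet sediment, W/(m K) (assumed)

def steady_ratio(pe, zeta):
    """Normalized steady temperature (T(z) - T_top) / (T_bottom - T_top)
    at relative depth zeta for Peclet number pe (positive = downward flow)."""
    if abs(pe) < 1e-9:
        return zeta  # conduction-only limit
    return (math.exp(pe * zeta) - 1.0) / (math.exp(pe) - 1.0)

def invert_peclet(r_obs, zeta=0.5, lo=-30.0, hi=30.0):
    """Bisection for the Peclet number that reproduces an observed
    normalized temperature; steady_ratio decreases monotonically in pe."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if steady_ratio(mid, zeta) > r_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical observation: mid-depth of a 0.5 m profile reads 0.2689
depth = 0.5  # m
pe = invert_peclet(0.2689)
velocity = pe * K_THERMAL / (RHO_C_WATER * depth)  # Darcy flux, m/s
```

The sign of the recovered Peclet number distinguishes upward from downward flux, which is exactly the contrast drawn between spawning and non-spawning areas above.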
Abstract:
Erosion of dentine causes mineral dissolution, while the organic compounds remain at the surface. Therefore, a determination of tissue loss is complicated. Established quantitative methods for the evaluation of enamel have also been used for dentine, but the suitability of these techniques in this field has not been systematically determined. Therefore, this study aimed to compare longitudinal microradiography (LMR), contacting (cPM) and non-contacting profilometry (ncPM), and analysis of dissolved calcium (Ca analysis) in the erosion solution. Results are discussed in the light of the histology of dentine erosion. Erosion was performed with 0.05 M citric acid (pH 2.5) for 30, 60, 90 or 120 min, and erosive loss was determined by each method. LMR, cPM and ncPM were performed before and after collagenase digestion of the demineralised organic surface layer, with an emphasis on moisture control. Scanning electron microscopy was performed on randomly selected specimens. All measurements were converted into micrometres. Profilometry was not suitable to adequately quantify mineral loss prior to collagenase digestion. After 120 min of erosion, values of 5.4 +/- 1.9 microm (ncPM) and 27.8 +/- 4.6 microm (cPM) were determined. Ca analysis revealed a mineral loss of 55.4 +/- 11.5 microm. The values for profilometry after matrix digestion were 43.0 +/- 5.5 microm (ncPM) and 46.9 +/- 6.2 microm (cPM). Relative and proportional biases were detected for all method comparisons. The mineral loss values were below the detection limit for LMR. The study revealed gross differences between methods, particularly when demineralised organic surface tissue was present. These results indicate that the choice of method is critical and depends on the parameter under study.
Abstract:
Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under severely limited evaluation budget. The recently released R package KrigInv 3 is presented and offers a sound implementation of various sampling criteria for those kinds of inverse problems. KrigInv is based on the DiceKriging package, and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples. Different functionalities of KrigInv are gradually explained. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed. These include the fine tuning of numerical integration and optimization procedures used within the computation and the optimization of the considered criteria.
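KrigInv itself is an R package, but the underlying idea, using the kriging (Gaussian process) posterior to target new evaluations where the excursion classification is most uncertain, can be sketched in a few lines. The toy criterion below (maximize p(1-p), the variance of the excursion indicator) is a simplification under assumed kernel settings, not one of the six KrigInv criteria:

```python
import math
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_tr, y_tr, x_te, nugget=1e-8):
    """Kriging predictive mean and standard deviation (zero prior mean)."""
    K = rbf(x_tr, x_tr) + nugget * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_tr
    var = 1.0 - np.sum(Ks * sol, axis=0)  # rbf(x, x) = 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 1e-12))

def excursion_probability(mu, sd, threshold):
    """P(f(x) > threshold) under the Gaussian predictive distribution."""
    z = (mu - threshold) / sd
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])

f = lambda x: np.sin(6.0 * x)   # stand-in for the expensive function
x_tr = np.array([0.05, 0.35, 0.65, 0.95])
x_te = np.linspace(0.0, 1.0, 201)
mu, sd = gp_posterior(x_tr, f(x_tr), x_te)
p = excursion_probability(mu, sd, threshold=0.0)
x_next = x_te[np.argmax(p * (1.0 - p))]  # sample next where p is closest to 1/2
```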
Abstract:
The goal of this paper is to establish exponential convergence of $hp$-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the $hp$-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610--1633] based on axiparallel $\sigma$-geometric anisotropic meshes and $\bm{s}$-linear anisotropic polynomial degree distributions.
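For orientation, exponential convergence results of this type are usually stated schematically as follows (a sketch only; the precise norm and constants are as in the cited analysis, with $N$ the number of degrees of freedom and the exponent $1/5$ characteristic of the three-dimensional setting):

```latex
\| u - u_{hp} \|_{\mathrm{dG}} \le C \exp\!\left( -b\, N^{1/5} \right),
\qquad C, b > 0 \text{ independent of } N .
```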
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
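As a baseline for the photosphere discussion, the classical gray radiative-equilibrium profile with the Eddington closure places the photosphere exactly at optical depth 2/3; the point above is that irradiation, internal heat and scattering shift this value. A minimal sketch of the baseline (the effective temperature is an arbitrary illustrative value):

```python
def gray_t4(tau, t_eff):
    """Milne's gray radiative-equilibrium solution with the Eddington
    closure: T(tau)^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
    return 0.75 * t_eff ** 4 * (tau + 2.0 / 3.0)

t_eff = 1000.0                 # illustrative effective temperature, K
tau_photosphere = 2.0 / 3.0
# At tau = 2/3 the local temperature equals T_eff in this idealization:
t4_at_photosphere = gray_t4(tau_photosphere, t_eff)
```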
Abstract:
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The considered sensitivity is the derivative of the upper tail probability with respect to the parameter of the summation index distribution. Random sums with Poisson or Geometric distributed summation indices and Gamma or Weibull distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that the saddlepoint approximation and the score method with importance sampling are very accurate. However, the saddlepoint approximation is substantially faster than the score method with importance sampling. Thus, the suggested saddlepoint approximation can be conveniently used in various scientific problems.
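The score-method sensitivity mentioned above (without the importance-sampling refinement) is easy to sketch for a Poisson(λ)-Gamma random sum: since only the summation-index distribution depends on λ, d/dλ P(S > s) = E[1{S > s}(N/λ − 1)]. A plain Monte Carlo version with arbitrary illustrative parameters, not the paper's implementation:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tail_and_score_sensitivity(lam, shape, scale, s, n_sims=100_000, seed=7):
    """Estimate P(S > s) and d/d(lam) P(S > s) for S = X_1 + ... + X_N,
    N ~ Poisson(lam), X_i ~ Gamma(shape, scale), via the score method:
    the score of the Poisson pmf with respect to lam is N/lam - 1."""
    rng = random.Random(seed)
    hit = grad = 0.0
    for _ in range(n_sims):
        n = poisson_sample(rng, lam)
        total = sum(rng.gammavariate(shape, scale) for _ in range(n))
        if total > s:
            hit += 1.0
            grad += n / lam - 1.0
    return hit / n_sims, grad / n_sims

p_tail, dp_dlam = tail_and_score_sensitivity(lam=2.0, shape=2.0, scale=1.0, s=4.0)
```

Raising λ adds summands, so the estimated derivative of the tail probability comes out positive, as expected.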
Abstract:
The evolution of porosity due to dissolution/precipitation processes of minerals and the associated change of transport parameters are of major interest for natural geological environments and engineered underground structures. We designed a reproducible and fast-to-conduct 2D experiment, which is flexible enough to investigate several process couplings implemented in the numerical code OpenGeosys-GEM (OGS-GEM). We investigated advective-diffusive transport of solutes, the effect of liquid-phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. In addition, the system allowed us to investigate the influence of microscopic (pore-scale) processes on macroscopic (continuum-scale) transport. A Plexiglas tank of dimensions 10 × 10 cm was filled with a 1 cm thick reactive layer consisting of a bimodal grain-size distribution of celestite (SrSO4) crystals, sandwiched between two layers of sand. A barium chloride solution was injected into the tank, causing an asymmetric flow field to develop. As the barium chloride reached the celestite region, dissolution of celestite was initiated and barite precipitated. Due to the higher molar volume of barite, its precipitation caused a porosity decrease and thus also a decrease in the permeability of the porous medium. The change of flow in space and time was observed via injection of conservative tracers and analysis of effluents. In addition, an extensive post-mortem analysis of the reacted medium was conducted. We could successfully model the flow (with and without fluid density effects) and the transport of conservative tracers with a (continuum-scale) reactive transport model. The prediction of the reactive experiments initially failed. Only the inclusion of information from the post-mortem analysis gave a satisfactory match for the case where the flow field changed due to dissolution/precipitation reactions.
We then concentrated on refining the post-mortem analysis and investigating the dissolution/precipitation mechanisms at the pore scale. Our analytical techniques combined scanning electron microscopy (SEM) and synchrotron X-ray micro-diffraction/micro-fluorescence performed at the XAS beamline (Swiss Light Source). The newly formed phases include epitaxial barite micro-crystals grown on large celestite crystals and a nano-crystalline barite phase (resulting from the dissolution of small celestite crystals) with residues of celestite crystals in the pore interstices. Classical nucleation theory, using well-established and estimated parameters describing barite precipitation, was applied to explain the mineralogical changes occurring in our system. Our pore-scale investigation showed the limits of the continuum-scale reactive transport model. Although kinetic effects were implemented by fixing two distinct rates for the dissolution of large and small celestite crystals, instantaneous precipitation of barite was assumed as soon as oversaturation occurred. Precipitation kinetics, passivation of large celestite crystals and metastability of supersaturated solutions, i.e. the conditions under which nucleation cannot occur despite high supersaturation, were neglected. These results will be used to develop a model that describes precipitation and dissolution of crystals at the pore scale for various transport and chemical conditions. Pore-scale modelling can then be used to parameterize constitutive equations that introduce pore-scale corrections into macroscopic (continuum) reactive transport models. Microscopic understanding of the system is fundamental for modelling from the pore to the continuum scale.
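In continuum-scale models of this kind, the porosity change from the SrSO4 → BaSO4 replacement and its feedback on permeability are often handled with molar-volume bookkeeping plus a Kozeny-Carman-type relation. A minimal sketch (the molar volumes are approximate literature values; the initial porosity, permeability and reacted amount are illustrative, not the experiment's values):

```python
V_CELESTITE = 46.3e-6  # molar volume of SrSO4, m^3/mol (approximate)
V_BARITE = 52.1e-6     # molar volume of BaSO4, m^3/mol (approximate)

def update_porosity(phi0, mol_replaced_per_m3):
    """Porosity after mole-for-mole replacement of celestite by barite
    (SrSO4 + Ba2+ -> BaSO4 + Sr2+): barite's larger molar volume
    consumes pore space."""
    return phi0 - mol_replaced_per_m3 * (V_BARITE - V_CELESTITE)

def kozeny_carman(k0, phi0, phi):
    """One common Kozeny-Carman scaling of permeability with porosity."""
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

phi0, k0 = 0.30, 1.0e-12             # illustrative initial porosity, permeability (m^2)
phi = update_porosity(phi0, 5000.0)  # 5000 mol replaced per m^3 of medium (illustrative)
k = kozeny_carman(k0, phi0, phi)     # permeability drops as pores clog
```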
Abstract:
The ascertainment and analysis of adverse reactions to investigational agents present a significant challenge because of the infrequency of these events, their subjective nature and the low priority of safety evaluations in many clinical trials. A one-year review of antibiotic trials published in medical journals demonstrates the lack of standards for identifying and reporting these potentially fatal conditions. This review also illustrates the low probability of observing and detecting rare events in typical clinical trials, which include fewer than 300 subjects. Uniform standards for ascertainment and reporting are suggested, including operational definitions of study subjects. Meta-analysis of selected antibiotic trials using multivariate regression analysis indicates that meaningful conclusions may be drawn from data from multiple studies that are pooled in a scientifically rigorous manner.
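The claim about trials with fewer than 300 subjects follows from elementary probability: the chance of seeing at least one event of per-subject rate p in n subjects is 1 − (1 − p)^n, and the classical "rule of three" bounds the rate when no events are seen. A quick illustration (the rates are chosen for illustration only):

```python
def prob_at_least_one(p, n):
    """Probability that at least one adverse event of per-subject
    probability p is observed among n subjects."""
    return 1.0 - (1.0 - p) ** n

def rule_of_three_upper_bound(n):
    """Approximate 95% upper confidence bound on the event rate when
    zero events are observed in n subjects."""
    return 3.0 / n

# A 300-subject trial will usually miss a 1-in-1000 reaction:
p_detect = prob_at_least_one(0.001, 300)      # about 0.26
upper_bound = rule_of_three_upper_bound(300)  # 0.01: rates up to ~1% remain plausible
```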
Abstract:
At issue is whether or not isolated DNA is patent eligible under U.S. patent law, and the implications of that determination for public health. The U.S. Patent and Trademark Office has issued patents on DNA since the 1980s, and scientists and researchers have proceeded under that milieu since that time. Today, genetic research and testing related to the human breast cancer genes BRCA1 and BRCA2 is conducted within the framework of seven patents that were issued to Myriad Genetics and the University of Utah Research Foundation between 1997 and 2000. In 2009, suit was filed on behalf of multiple researchers, professional associations and others to invalidate fifteen of the claims underlying those patents. The Court of Appeals for the Federal Circuit, which hears patent cases, has invalidated claims for analyzing and comparing isolated DNA but has upheld claims to isolated DNA. The specific issue of whether isolated DNA is patent eligible is now before the Supreme Court, which is expected to decide the case by year's end. In this work, a systematic review was performed to determine the effects of DNA patents on various stakeholders and, ultimately, on public health, and to provide a legal analysis of the patent eligibility of isolated DNA and the likely outcome of the Supreme Court's decision.

A literature review was conducted to: first, identify the principal stakeholders with an interest in the patent eligibility of the isolated DNA sequences BRCA1 and BRCA2; and second, determine the effect of the case on those stakeholders. Published reports that addressed gene patents, the Myriad litigation, and the implications of gene patents for stakeholders were included. Next, an in-depth legal analysis of the patent eligibility of isolated DNA and methods for analyzing it was performed pursuant to accepted methods of legal research and analysis, based on legal briefs, federal law and jurisprudence, scholarly works and standard-practice legal analysis.
Biotechnology, biomedical and clinical research, access to health care, and personalized medicine were identified as the principal stakeholders and interests herein. Many experts believe that the patent eligibility of isolated DNA will not greatly affect the biotechnology industry insofar as genetic testing is concerned; unlike therapeutics, genetic testing does not require tremendous resources or lead time. The actual impact on biomedical researchers is uncertain, with greater impact expected for researchers whose work is intended for commercial purposes (versus basic science). The impact on access to health care has been surprisingly difficult to assess; while invalidating gene patents might be expected to decrease the cost of genetic testing and improve access to more laboratories and physicians' offices that provide the test, a 2010 study on the actual impact was inconclusive. As for personalized medicine, many experts believe that the availability of personalized medicine is ultimately a public policy issue for Congress, not the courts.

Based on the legal analysis performed in this work, this writer believes the Supreme Court is likely to invalidate patents on isolated DNA whose sequences are found in nature, because these gene sequences are a basic tool of scientific and technologic work and patents on isolated DNA would unduly inhibit their future use. Patents on complementary DNA (cDNA) are expected to stand, however, based on the human intervention required to craft cDNA and the product's distinction from the DNA found in nature.

In the end, the solution as to how to address gene patents may lie not in jurisprudence but in a fundamental change in business practices to provide expanded licenses that better address the interests of the several stakeholders.
Abstract:
Agrobacterium tumefaciens uses the VirB/D4 type IV secretion system (T4SS) to translocate oncogenic DNA (T-DNA) and protein substrates to plant cells. Independent of VirD4, the eleven VirB proteins are also essential for elaboration of a conjugative pilus termed the T pilus. The focus of this thesis is the characterization and analysis of two VirB proteins, VirB6 and VirB9, with respect to substrate translocation and T pilus biogenesis. Observed stabilizing effects of VirB6 on other VirB subunits and results of protein-protein interaction studies suggest that VirB6 mediates assembly of the secretion machine and T pilus through interactions with VirB7 and VirB9. Topology studies support a model for VirB6 as a polytopic membrane protein with a periplasmic N terminus, a large internal periplasmic loop, five transmembrane segments, and a cytoplasmic C terminus. Topology studies and Transfer DNA immunoprecipitation (TrIP) assays identified several important VirB6 functional domains: (i) the large internal periplasmic loop mediates interaction of VirB6 with the T-DNA, (ii) the membrane spanning region carboxyl-terminal to the large periplasmic loop mediates substrate transfer from VirB6 to VirB8, and (iii) the terminal regions of VirB6 are required for substrate transfer to VirB2 and VirB9. To analyze structure-function relationships of VirB9, the phenotypic consequences of dipeptide insertion mutations were characterized. Substrate discriminating mutations were shown to selectively export the oncogenic T-DNA and VirE2 to plant cells or a mobilizable IncQ plasmid to bacterial cells. Mutations affecting VirB9 interactions with VirB7 and VirB10 were localized to the C- and N- terminal regions respectively. Additionally, “uncoupling” mutations identified in VirB11 and VirB6 that block T pilus assembly, but not substrate transfer to recipient cells, were also identified in VirB9. 
These results in conjunction with computer analysis establish that VirB9, like VirB6, is composed of distinct regions or domains that contribute in various ways to secretion-channel activity and T pilus assembly. Lastly, in vivo immunofluorescence studies suggest that VirB9 localizes to the outer membrane and may play a role similar to that of the secretins/ushers of type II and III secretion systems in facilitating substrate translocation across this final bacterial barrier.
Abstract:
In September 1999, the International Monetary Fund (IMF) established the Poverty Reduction and Growth Facility (PRGF) to make the reduction of poverty and the enhancement of economic growth the fundamental objectives of lending operations in its poorest member countries. This paper studies the spending and absorption of aid in PRGF-supported programs, verifies whether the use of aid is programmed to be smoothed over time, and analyzes how considerations about macroeconomic stability influence the programmed use of aid. The paper shows that PRGF-supported programs permit countries to utilize all increases in aid within a few years, with the use of aid inflows smoothed over time. Our results reveal that spending is higher than absorption in both the long-run and the short-run use of aid, a robust finding of the study. Furthermore, the paper demonstrates that long-run spending exceeds the injected increase of aid inflows into the economy. In addition, the paper finds that the presence of a PRGF-supported program does not influence the actual absorption or spending of aid.
Abstract:
One of the key factors behind the growth in global trade in recent decades is an increase in intermediate inputs as a result of the development of vertical production networks (Feenstra, 1998). It is widely recognized that the formation of production networks is due to the expansion of multinational enterprises' (MNEs) activities. MNEs have been differentiated into two types according to their production structure: horizontal and vertical foreign direct investment (FDI). In this paper, we extend the model presented by Zhang and Markusen (1999) to include both horizontal and vertical FDI in a model with traded intermediates, using numerical general equilibrium analysis. The simulation results show that horizontal MNEs are more likely to exist when countries are similar in size and in relative factor endowments. Vertical MNEs are more likely to exist when countries differ in relative factor endowments and trade costs are positive. From the simulation results, lower trade costs for final goods and differences in factor intensity are conditions for attracting vertical MNEs.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-1980s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers).
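The work-stealing schedulers mentioned at the end can be illustrated with a deliberately simplified, single-threaded simulation of per-worker deques: each owner pops its newest task (LIFO, for locality), while an idle worker steals the oldest task (FIFO) from a victim. This is a toy model of the scheduling policy only, not a parallel implementation:

```python
from collections import deque

class Worker:
    def __init__(self, name):
        self.name = name
        self.tasks = deque()  # owner works the right end, thieves the left
        self.done = []

    def step(self, others):
        """Run one task: own newest first, otherwise steal a victim's oldest."""
        if self.tasks:
            self.done.append(self.tasks.pop())            # LIFO: good locality
            return True
        for victim in others:
            if victim.tasks:
                self.done.append(victim.tasks.popleft())  # FIFO: steal old, big work
                return True
        return False

a, b = Worker("a"), Worker("b")
a.tasks.extend(range(6))  # all of the (irregular) work starts on one worker
workers = [a, b]
while True:
    progressed = [w.step([o for o in workers if o is not w]) for w in workers]
    if not any(progressed):
        break
# a keeps its newest tasks while b drains a's oldest:
# a.done == [5, 4, 3] and b.done == [0, 1, 2]
```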