905 results for Negative dimensional integration method (NDIM)
Abstract:
After decades of development in programming languages and programming environments, Smalltalk is still one of the few environments that provide advanced features, and it is still widely used in industry. However, as Java has become prevalent, the ability to call Java code from Smalltalk, and vice versa, has become important. Traditional approaches to integrating the Java and Smalltalk languages rely on low-level communication between separate Java and Smalltalk virtual machines. We are not aware of any attempt to execute and integrate the Java language directly in the Smalltalk environment. A direct integration allows for very tight and almost seamless integration of the languages and their objects within a single environment. Yet integration and language interoperability pose challenging issues related to method naming conventions, method overloading, exception handling, and thread-locking mechanisms. In this paper we describe ways to overcome these challenges and to integrate Java into the Smalltalk environment. Using the techniques described in this paper, the programmer can call Java code from Smalltalk using standard Smalltalk idioms while the semantics of each language remains preserved. We present STX:LIBJAVA, an implementation of the Java virtual machine within Smalltalk/X, as a validation of our approach.
Abstract:
BACKGROUND: Drugs are routinely combined in anesthesia and pain management to obtain an enhancement of the desired effects. However, a parallel enhancement of the undesired effects might take place as well, resulting in limited therapeutic usefulness. Therefore, when addressing the question of optimal drug combinations, side effects must be taken into account. METHODS: By extension of a previously published interaction model, the authors propose a method to study drug interactions that also accounts for their side effects. A general outcome parameter, identified as the patient's well-being, is defined by superposition of positive and negative effects. Well-being response surfaces are computed and analyzed for varying drug pharmacodynamics and interaction types. In particular, the existence of multiple maxima and of optimal drug combinations is investigated for the combination of two drugs. RESULTS: Both drug pharmacodynamics and interaction type affect the well-being surface and the resulting optimal combinations. The effect of the interaction parameters can be explained in terms of synergy and antagonism and remains unchanged for varying pharmacodynamics. For all simulations performed for the combination of two drugs, more than one maximum was never observed. CONCLUSIONS: The model is consistent with clinical knowledge and supports previously published experimental results on optimal drug combinations. This new framework improves understanding of the characteristics of drug combinations used in clinical practice and can be used in clinical research to identify optimal drug dosing.
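For orientation, one common way to parameterize such a two-drug response surface is a Greco-type sigmoid interaction model; the form below is offered only as a hedged illustration of the ingredients (interaction parameter, sigmoidicity), not as the authors' exact extension:

\[
U = \frac{c_A}{C_{50,A}} + \frac{c_B}{C_{50,B}} + \alpha\,\frac{c_A}{C_{50,A}}\cdot\frac{c_B}{C_{50,B}},
\qquad
E(c_A, c_B) = E_{\max}\,\frac{U^{\gamma}}{1 + U^{\gamma}},
\]

where \(\alpha > 0\) corresponds to synergy, \(\alpha < 0\) to antagonism, and \(\alpha = 0\) to additivity. A well-being surface of the kind studied here would then superpose a surface for the desired effect and one or more surfaces for the side effects, e.g. \(W(c_A, c_B) = E^{+}(c_A, c_B) - E^{-}(c_A, c_B)\), with the optimal combination being the maximizer of \(W\).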
Abstract:
Generalized linear mixed models (GLMM) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, has encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high dimensional integrals.
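For readers unfamiliar with PQL, the working-variable construction mentioned above has the following standard form (a sketch in the spirit of Breslow and Clayton, with generic notation rather than any specific dataset): the GLMM specifies

\[
g(\mu_{ij}) = \eta_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top} b_i, \qquad b_i \sim N(0, D),
\]

and each PQL cycle fits a linear mixed model to the working response and iterative weights

\[
y_{ij}^{*} = \hat{\eta}_{ij} + (y_{ij} - \hat{\mu}_{ij})\, g'(\hat{\mu}_{ij}), \qquad
w_{ij} = \left[\phi\, v(\hat{\mu}_{ij})\, g'(\hat{\mu}_{ij})^{2}\right]^{-1},
\]

updating \(\beta\), the predicted \(b_i\), and the variance components until convergence; the bias with binary outcomes reflects the fact that this linearization is a rough approximation when each observation carries little information.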
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of the prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity at the corresponding mass-per-charge value, x, in that specimen. Given high coefficients of variation and other characteristics of the protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in the SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of the prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate the sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct them. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
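A minimal sketch of the described two-step strategy (binary peak indicators followed by boosting) is shown below; the helper name, binning scheme, prominence threshold, and data are all hypothetical placeholders, not the authors' implementation:

```python
# Minimal sketch: reduce noisy SELDI-like spectra to binary peak indicators on a
# coarse mass-per-charge grid (which also absorbs small x-axis shifts), then
# combine the binary predictors with boosting. Illustrative only.
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def peak_indicators(intensities, n_bins=500, prominence=1.0):
    """Convert one spectrum (y values along the m/z axis) into binary peak flags."""
    peaks, _ = find_peaks(intensities, prominence=prominence)
    flags = np.zeros(n_bins, dtype=int)
    flags[(peaks * n_bins) // len(intensities)] = 1
    return flags

# Placeholder data: 60 specimens, ~48,000 intensity readings each, binary labels.
rng = np.random.default_rng(0)
spectra = np.abs(rng.standard_normal((60, 48000)))
labels = rng.integers(0, 2, size=60)

X = np.vstack([peak_indicators(s) for s in spectra])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

# Boosting over binary predictors (sklearn's default base learner is a decision stump).
clf = AdaBoostClassifier(n_estimators=200).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```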
Abstract:
Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. Among standard variable selection methods shown both to have good predictive accuracy and to be computationally efficient is the elastic net penalization approach. In this paper, an adaptation of the elastic net approach is presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time (AFT) model. Assessment of the two methods is conducted through simulation studies and through analysis of microarray data obtained from a set of patients with diffuse large B-cell lymphoma where time to survival is of interest. The approaches are shown to match or exceed the predictive performance of a Cox-based and an AFT-based variable selection method. The methods are moreover shown to be much more computationally efficient than their respective Cox- and AFT-based counterparts.
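As an illustrative sketch (not the authors' software), elastic net penalization under the Cox model can be run with scikit-survival's CoxnetSurvivalAnalysis; the data below are random placeholders and the AFT variant discussed in the paper is not shown:

```python
# Sketch only: elastic-net penalized Cox regression for high-dimensional survival data.
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.default_rng(1)
n, p = 100, 2000                                  # few samples, many genes
X = rng.standard_normal((n, p))
time = rng.exponential(scale=10.0, size=n)        # placeholder survival times
event = rng.integers(0, 2, size=n).astype(bool)   # placeholder censoring indicator
y = Surv.from_arrays(event=event, time=time)

# l1_ratio mixes the lasso (1.0) and ridge (0.0) penalties, as in the elastic net.
model = CoxnetSurvivalAnalysis(l1_ratio=0.5, alpha_min_ratio=0.01)
model.fit(X, y)

# coef_ holds the whole regularization path; nonzero entries at a given alpha are
# the selected variables.
print("variables selected at the smallest alpha:", int(np.sum(model.coef_[:, -1] != 0)))
```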
Abstract:
There is an emerging interest in modeling spatially correlated survival data in biomedical and epidemiological studies. In this paper, we propose a new class of semiparametric normal transformation models for right-censored spatially correlated survival data. This class of models assumes that survival outcomes marginally follow a Cox proportional hazards model with an unspecified baseline hazard, and that their joint distribution is obtained by transforming the survival outcomes to normal random variables whose joint distribution is assumed to be multivariate normal with a spatial correlation structure. A key feature of the class of semiparametric normal transformation models is that it provides a rich class of spatial survival models in which the regression coefficients have a population-average interpretation and the spatial dependence of survival times is conveniently modeled, via the transformed variables, by flexible normal random fields. We study the relationship between the spatial correlation structure of the transformed normal variables and the dependence measures of the original survival times. Direct nonparametric maximum likelihood estimation in such models is practically infeasible due to the high-dimensional intractable integration of the likelihood function and the infinite-dimensional nuisance baseline hazard parameter. We hence develop a class of spatial semiparametric estimating equations, which conveniently estimate the population-level regression coefficients and the dependence parameters simultaneously. We study the asymptotic properties of the proposed estimators and show that they are consistent and asymptotically normal. The proposed method is illustrated with an analysis of data from the East Boston Asthma Study, and its performance is evaluated using simulations.
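The transformation at the heart of this model class can be sketched as follows (generic notation offered as an illustration rather than the exact formulation in the paper): if \(T_i\) is the survival time at site \(s_i\) with marginal survival function \(S_i(\cdot)\) under the Cox model, then

\[
\tilde{T}_i = \Phi^{-1}\{1 - S_i(T_i)\}
\]

is marginally standard normal, and the joint spatial dependence is imposed through

\[
(\tilde{T}_1, \dots, \tilde{T}_n)^{\top} \sim N(0, \Sigma), \qquad \Sigma_{ij} = \rho(\lVert s_i - s_j \rVert; \theta),
\]

where \(\rho(\cdot; \theta)\) is a spatial correlation function (e.g., exponential) whose parameters are estimated jointly with the regression coefficients via the spatial semiparametric estimating equations.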
Abstract:
Visualization and exploratory analysis are important parts of any data analysis and are made more challenging when the data are voluminous and high-dimensional. One such example is environmental monitoring data, which are often collected over time and at multiple locations, resulting in a geographically indexed multivariate time series. Financial data, although not necessarily containing a geographic component, present another source of high-volume multivariate time series data. We present the mvtsplot function, which provides a method for visualizing multivariate time series data. We outline the basic design concepts and provide some examples of its usage by applying it to a database of ambient air pollution measurements in the United States and to a hypothetical portfolio of stocks.
Abstract:
This study compared the results of reverse transcription-polymerase chain reaction (RT-PCR) and traditional virus isolation on cell culture for the detection of viral haemorrhagic septicaemia virus (VHSV) and infectious haematopoietic necrosis virus (IHNV). RT-PCR was used for 172 tissue sample pools (a total of 859 fish) originating from a field survey on the occurrence of VHSV and IHNV in farmed and wild salmonids in Switzerland. These samples represented all sites with fish that were identified as virus-positive by virus isolation (three sites, four positive tissue sample pools) and/or demonstrated positive anti-VHSV antibody titres (83 sites, 121 positive blood samples) in a serum plaque neutralization test (SPNT). The RT-PCR technique confirmed the four VHSV-positive tissue sample pools detected by virus isolation and additionally identified one VHSV-positive sample that showed positive anti-VHSV antibody titres but was negative in virus isolation. With IHNV, RT-PCR detected two positive samples not identified by virus isolation, while in these fish the SPNT result had been questionable. One of the IHNV-positive samples represents the first detection of IHNV RNA in wild brown trout in Switzerland. Compared with SPNT, RT-PCR, like virus isolation, detected a much lower number of positive cases; reasons for this discrepancy are discussed. Our results indicate that RT-PCR can not only be successfully applied in field surveys but may also be slightly more sensitive than virus isolation. However, in a titration experiment under laboratory conditions, the sensitivity of RT-PCR was not significantly higher than that of virus isolation.
Abstract:
Diabetic nephropathy and end-stage renal failure are still a major cause of mortality among patients with diabetes mellitus (DM). In this study, we evaluated the Clinitek-Microalbumin (CM) screening test strip for the detection of microalbuminuria (MA) in a random morning spot urine in comparison with the quantitative assessment of albuminuria in the timed overnight urine collection ("gold standard"). One hundred thirty-four children, adolescents, and young adults with insulin-dependent DM Type 1 were studied at 222 outpatient visits. Because of urinary tract infection and/or haematuria, the data from 13 visits were excluded. Finally, 165 timed overnight urine collections were obtained in the remaining 209 visits (79% sample-per-visit rate). Ten (6.1%) patients presented MA of ≥15 µg/min. In comparison, however, 200 spot urine samples could be screened (96% sample-per-visit rate), yielding a significant increase in compliance and screening rate (P<.001, McNemar test). Furthermore, on 156 occasions the gold standard and CM could be directly compared. The sensitivity and specificity of CM in the spot urine (cut-off ≥30 mg albumin/l) were 0.89 [95% confidence interval (CI) 0.56-0.99] and 0.73 (CI 0.66-0.80), respectively. The positive and negative predictive values were 0.17 (CI 0.08-0.30) and 0.99 (CI 0.95-1.00), respectively. Considering the CM albumin-to-creatinine ratio, the results were poorer than with the albumin concentration alone. Using CM instead of quantitative assessment of albuminuria is not cost-effective (US$35 versus US$60 per patient per year). In conclusion, to exclude MA, the CM strip used on the random spot urine is reliable and easy to handle, but positive screening results of ≥30 mg albumin/l must be confirmed by analyses of the timed overnight collected urine. Although screening compliance is improved, we cannot recommend CM for analysing random morning spot urine for MA in a paediatric diabetic outpatient setting, because the specificity is far too low.
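As a consistency check (treating the reported 6.1% as the relevant prevalence, which is an approximation since the 156 directly compared samples may have a slightly different prevalence), the predictive values follow from Bayes' rule:

\[
\mathrm{PPV} = \frac{0.89 \times 0.061}{0.89 \times 0.061 + (1 - 0.73)(1 - 0.061)} \approx 0.18, \qquad
\mathrm{NPV} = \frac{0.73 \times (1 - 0.061)}{(1 - 0.89) \times 0.061 + 0.73 \times (1 - 0.061)} \approx 0.99,
\]

in line with the reported 0.17 and 0.99, and illustrating why a positive screen at such low prevalence needs quantitative confirmation while a negative screen reliably excludes MA.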
Abstract:
A CT-based method ("HipMotion") for the noninvasive three-dimensional assessment of femoroacetabular impingement (FAI) was developed, validated, and applied in a clinical pilot study. The method allows for the anatomically based calculation of hip range of motion (ROM), the exact location of the impingement zone, and the simulation of quantified surgical maneuvers for FAI. The accuracy of HipMotion was 0.7 ± 3.1° in a plastic bone setup and -5.0 ± 5.6° in a cadaver setup. Reliability and reproducibility were excellent [intraclass correlation coefficient (ICC) > 0.87] for all measures except external rotation (ICC = 0.48). The normal ROM was determined from a cohort of 150 patients and was compared to 31 consecutive hips with FAI. Patients with FAI had significantly decreased flexion, internal rotation, and abduction in comparison to normal hips (p < 0.001). Normal hip flexion and internal rotation are generally overestimated in a number of orthopedic textbooks. HipMotion is a useful tool for further assessment of impinging hips and for appropriate planning of the necessary amount of surgical intervention, which represents the basis for future computer-assisted treatment of FAI with less invasive surgical approaches, such as hip arthroscopy.
Abstract:
To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
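In a model setting, the regularized equation-error formulation can be written as follows (a generic sketch, not necessarily the exact functional analyzed here): for the elliptic BVP \(-\nabla \cdot (a \nabla u) = f\) in \(\Omega\) with \(u = g\) on \(\partial\Omega\) and a measurement \(u^{\delta}\) of the solution \(u\), estimate the coefficient by

\[
\min_{a \in A_{\mathrm{ad}}} \; \bigl\| \nabla \cdot (a \nabla u^{\delta}) + f \bigr\|_{H^{-1}(\Omega)}^{2} + \rho \, \| a - a_0 \|^{2},
\]

possibly augmented by a boundary-condition misfit term, where \(A_{\mathrm{ad}}\) is an admissible set, \(a_0\) a prior guess, and \(\rho > 0\) the Tikhonov regularization parameter controlling the instability; convergence of the estimate as \(u^{\delta} \to u\) and \(\rho \to 0\) at a suitable rate is the type of result established above, in both the infinite-dimensional and the finite-element settings.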
Abstract:
The objective of this doctoral research is to investigate internal frost damage due to crystallization pore pressure in porous cement-based materials by developing computational and experimental characterization tools. As an essential component of the U.S. infrastructure system, the durability of concrete has a significant impact on maintenance costs. In cold climates, freeze-thaw damage is a major issue affecting the durability of concrete. The deleterious effects of the freeze-thaw cycle depend on the microscale characteristics of concrete, such as the pore sizes and the pore distribution, as well as on the environmental conditions. Recent theories attribute the internal frost damage of concrete to crystallization pore pressure in cold environments. The pore structure has a significant impact on the freeze-thaw durability of cement/concrete samples. Scanning electron microscopy (SEM) and transmission X-ray microscopy (TXM) techniques were applied to characterize freeze-thaw damage within the pore structure. In the microscale pore system, the crystallization pressures at sub-cooling temperatures were calculated using an interface energy balance with thermodynamic analysis. Multi-phase Extended Finite Element Modeling (XFEM) and bilinear Cohesive Zone Modeling (CZM) were developed to simulate the internal frost damage of heterogeneous cement-based material samples. The fracture simulations with these two techniques were validated by comparing the predicted fracture behavior with the damage captured in compact tension (CT) and single-edge notched beam (SEB) bending tests. The study applied the developed computational tools to simulate the internal frost damage caused by ice crystallization on two-dimensional (2-D) SEM and three-dimensional (3-D) reconstructed SEM and TXM digital samples. The pore pressure calculated from the thermodynamic analysis was used as input for the model simulations. The 2-D and 3-D bilinear CZM predicted crack initiation and propagation within the cement paste microstructure. The favorably predicted crack paths in concrete/cement samples indicate that the developed bilinear CZM techniques are able to capture crack nucleation and propagation in cement-based material samples with multiple phases and their associated interfaces. The comparison of the computational predictions with the actual damaged samples also indicates that ice crystallization pressure is the main mechanism of internal frost damage in cementitious materials.
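For orientation only (this is a first-order estimate commonly used in the frost-damage literature, not the thesis' full interface-energy-balance analysis), the crystallization pressure available at a sub-cooling of \(T_m - T\) below the bulk melting point is often approximated as

\[
p_c \approx \Delta S_{fv}\,(T_m - T),
\]

with \(\Delta S_{fv} \approx 1.2\ \mathrm{MPa\,K^{-1}}\) for ice, so that a few degrees of sub-cooling can already generate pressures of several megapascals, comparable to the tensile strength of cement paste; the interface energy balance further limits which pores a crystal can enter through the crystal-liquid surface energy and the pore-entry radius.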
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e. the dynamical coupling between a fluid and a solid, which is otherwise very complex, time consuming, and expensive to study experimentally. A method that can accurately model these types of mechanical systems through numerical solution is therefore a great option, and its advantages are even more obvious when considering huge structures like bridges, high-rise buildings, or even wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the KLE to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ODE time integration schemes and thus allows us to tackle the various multiphysics problems as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed time step ODE solver when dealing with unsteady problems. This research deals with the analysis of the effects of the Courant-Friedrichs-Lewy (CFL) condition for the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step ∆t is constrained by the CFL-like condition ∆t ≤ const · h^α, where h denotes the characteristic spatial discretization size.
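A minimal sketch of how such a CFL-like restriction could be enforced inside an adaptive time integrator is given below; the function name, the constant C, and the exponent alpha are illustrative placeholders whose values depend on the actual KLE discretization:

```python
# Illustrative sketch only: clamp an adaptively proposed ODE step to a CFL-like
# ceiling dt <= C * h**alpha (C and alpha depend on the spatial discretization).
def cfl_limited_step(dt_proposed: float, h: float, C: float = 0.5, alpha: float = 1.0) -> float:
    """Return the step actually taken: the minimum of the step suggested by the
    adaptive ODE error controller and the CFL-like bound."""
    dt_cfl = C * h**alpha
    return min(dt_proposed, dt_cfl)

# Example: the error controller proposes 1e-2, but a mesh size of h = 1e-3
# forces a smaller step under the CFL-like restriction.
print(cfl_limited_step(dt_proposed=1e-2, h=1e-3))
```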
Abstract:
The Pacaya volcanic complex is part of the Central American volcanic arc, which is associated with the subduction of the Cocos tectonic plate under the Caribbean plate. Located 30 km south of Guatemala City, Pacaya is situated on the southern rim of the Amatitlan Caldera. It is the largest post-caldera volcano and has been one of Central America's most active volcanoes over the last 500 years. Between 400 and 2000 years B.P., the Pacaya volcano experienced a major collapse, which resulted in the formation of a horseshoe-shaped scarp that is still visible. In recent years, several smaller collapses associated with the activity of the volcano (in 1961 and 2010) have affected its northwestern flanks; these were likely induced by local and regional stress changes. The similar orientation of dry and volcanic fissures and the distribution of new vents would likely explain the reactivation of the pre-existing stress configuration responsible for the old collapse. This paper presents the first stability analysis of the Pacaya volcanic flank. The inputs for the geological and geotechnical models were defined based on stratigraphical, lithological, and structural data and on material properties obtained from field surveys and laboratory tests. According to the mechanical characteristics, three lithotechnical units were defined: Lava, Lava-Breccia, and Breccia-Lava. The Hoek-Brown failure criterion was applied to each lithotechnical unit, and the rock mass friction angle, apparent cohesion, and strength and deformation characteristics were computed over a specified stress range. Further, the stability of the volcano was evaluated by two-dimensional analyses performed with the Limit Equilibrium Method (LEM, ROCSCIENCE) and the Finite Element Method (FEM, PHASE 2 7.0). The stability analysis mainly focused on the modern Pacaya volcano built inside the collapse amphitheatre of "Old Pacaya". The volcanic instability was assessed based on the variability of the safety factor using deterministic, sensitivity, and probabilistic analyses, considering gravitational instability and the effects of external forces such as magma pressure and seismicity as potential triggering mechanisms of lateral collapse. The preliminary results from the analysis provide two insights: first, the least stable sector is on the south-western flank of the volcano; second, the lowest safety factor value suggests that the edifice is stable under gravity alone and that external triggering mechanisms represent likely destabilizing factors.
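For reference, the generalized Hoek-Brown criterion applied to each lithotechnical unit has the standard form (quoted here as background; the unit-specific parameter values come from the field and laboratory data and are not reproduced):

\[
\sigma_1' = \sigma_3' + \sigma_{ci}\left( m_b \, \frac{\sigma_3'}{\sigma_{ci}} + s \right)^{a},
\]

where \(\sigma_{ci}\) is the uniaxial compressive strength of the intact rock and \(m_b\), \(s\), and \(a\) are rock-mass constants obtained from the intact-rock parameter \(m_i\), the Geological Strength Index (GSI), and the disturbance factor \(D\); equivalent friction angles and apparent cohesions are then fitted over the stress range of interest, which is the procedure summarized above before the LEM and FEM stability analyses.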