884 results for Dunkl Kernel


Relevance:

10.00%

Publisher:

Abstract:

This project addresses the unreliability of operating system code, in particular in device drivers. Device driver software is the interface between the operating system and the device's hardware. Device drivers are written in low-level code, making them difficult to understand. Almost all device drivers are written in the programming language C, which allows direct manipulation of memory. Because of the complexity of manually moving data, most mistakes in operating systems occur in device driver code. The programming language Clay can be used to check device driver code at compile time. Clay does most of its error checking statically, minimizing the overhead of run-time checks in order to stay competitive with C's performance. The Clay compiler can detect many more types of errors than the C compiler, such as buffer overflows, kernel stack overflows, NULL pointer uses, uses of freed memory, and aliasing errors. Clay code that compiles successfully is guaranteed not to fail at run time with any error that Clay can detect. Even though C is unsafe, most device drivers are currently written in it. Device drivers are not only the part of the operating system most likely to fail, they are also the largest part of the operating system. Since rewriting every existing device driver in Clay by hand would be impractical, this thesis is part of a project to automate the translation of existing drivers from C to Clay. Although C and Clay both allow low-level manipulation of data and fill the same niche for developing low-level code, they have different syntax, type systems, and paradigms. This paper explores how C can be translated into Clay. It identifies which parts of C device drivers cannot be translated into Clay and what information Clay drivers will require that C cannot provide. It also explains how these translations are carried out, by describing how each C structure is represented in the compiler and how those representations are transformed into the corresponding Clay structures.

Relevance:

10.00%

Publisher:

Abstract:

We consider analytic reproducing kernel Hilbert spaces H with orthonormal bases of the form {(a_n + b_n z) z^n : n ≥ 0}. If b_n = 0 for all n, then H is a diagonal space and multiplication by z, M_z, is a weighted shift. Our focus is on providing extensive classes of examples for which M_z is a bounded subnormal operator on a tridiagonal space H where b_n ≠ 0. The Aronszajn sum of H and (1 - z)H, where H is either the Hardy space or the Bergman space on the disk, are two such examples.
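
For orientation, here is the standard construction behind this setup (a worked equation, not stated explicitly in the abstract): the reproducing kernel of a space with orthonormal basis {e_n} is the sum of e_n(z) times the conjugate of e_n(w), which for the tridiagonal basis above specializes as follows.

```latex
% Reproducing kernel built from the orthonormal basis e_n(z) = (a_n + b_n z) z^n:
K(z, w) = \sum_{n \ge 0} e_n(z)\,\overline{e_n(w)}
        = \sum_{n \ge 0} (a_n + b_n z)\,\overline{(a_n + b_n w)}\,(z \bar{w})^n .
% If b_n = 0 for all n, this reduces to the diagonal kernel
K(z, w) = \sum_{n \ge 0} |a_n|^2 (z \bar{w})^n ,
% and M_z acts on the basis as the weighted shift  M_z e_n = (a_n / a_{n+1}) e_{n+1}.
```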

Relevance:

10.00%

Publisher:

Abstract:

We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence, and we compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
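
As a minimal sketch of the third estimator mentioned above (isotonization of a kernel estimator), the following Python snippet computes a kernel density estimate on a grid and projects it onto the set of decreasing densities; the data, grid, and bandwidth are purely illustrative and are not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): kernel density estimate on a
# grid, then isotonized to be non-increasing on the positive half line.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)        # sample from a decreasing density

grid = np.linspace(0.01, 5.0, 200)
kde = gaussian_kde(x, bw_method=0.3)            # illustrative bandwidth choice
f_kernel = kde(grid)

# Isotonization: least-squares projection of the kernel estimate onto
# functions that are decreasing in the grid variable.
iso = IsotonicRegression(increasing=False)
f_isotonic = iso.fit_transform(grid, f_kernel)
```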

Relevance:

10.00%

Publisher:

Abstract:

This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases in our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although profiling and backfitting are generally not asymptotically equivalent, they are equivalent with our methods. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
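
To make the backfitting idea concrete, here is a generic sketch for a partially linear model y = x*beta + theta(t) + noise (my notation, not necessarily the paper's model): the algorithm alternates between a least-squares step for the parametric part and a kernel-smoothing step for the nonparametric part.

```python
# Generic backfitting sketch for y = x*beta + theta(t) + noise (illustrative only).
import numpy as np

def nadaraya_watson(t_grid, t, r, h):
    """Gaussian-kernel smoother of residuals r against t, evaluated at t_grid."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit(y, x, t, h=0.2, iters=50):
    beta, theta_at_t = 0.0, np.zeros_like(y)
    for _ in range(iters):
        # Parametric step: least squares on y with the current smooth removed.
        beta = np.sum(x * (y - theta_at_t)) / np.sum(x ** 2)
        # Nonparametric step: kernel smoothing of the partial residuals.
        theta_at_t = nadaraya_watson(t, t, y - x * beta, h)
    return beta, theta_at_t

rng = np.random.default_rng(1)
t = rng.uniform(0, 1, 300)
x = rng.normal(size=300)
y = 1.5 * x + np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=300)
beta_hat, _ = backfit(y, x, t)
```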

Relevance:

10.00%

Publisher:

Abstract:

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and we propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times and a linear model is assumed. We define matching estimators b(u) that correspond to pairs of observations separated by a specific lag u. Controlling for a smooth function of time, S_t, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of S_t and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use an appropriate linear combination of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when a model does not adequately describe the data.
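
The abstract does not spell out the formula for b(u); one plausible reading, used purely for illustration below, is the slope obtained by regressing lag-u differences of Y on lag-u differences of X. Under that assumption, the sketch computes the b(u) and the diagnostic plot of b(u) against lag u on simulated data with a slowly varying unmeasured confounder.

```python
# Hypothetical sketch of a lagged-estimator plot: b(u) is assumed here to be the
# slope of lag-u differences of Y on lag-u differences of X (the abstract does
# not give the estimator's exact form).
import numpy as np
import matplotlib.pyplot as plt

def lagged_estimators(x, y, max_lag):
    b = {}
    for u in range(1, max_lag + 1):
        dx, dy = x[u:] - x[:-u], y[u:] - y[:-u]
        b[u] = np.sum(dx * dy) / np.sum(dx ** 2)
    return b

rng = np.random.default_rng(2)
n = 1000
conf = np.cumsum(rng.normal(size=n)) / 30          # slowly varying unmeasured U
x = conf + rng.normal(size=n)
y = 0.5 * x + conf + rng.normal(size=n)

b = lagged_estimators(x, y, max_lag=30)
plt.plot(list(b.keys()), list(b.values()), marker="o")
plt.xlabel("lag u"); plt.ylabel("b(u)"); plt.title("Lagged-estimator plot (illustrative)")
plt.show()
```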

Relevance:

10.00%

Publisher:

Abstract:

We aimed at assessing stent geometry and in-stent contrast attenuation with 64-slice CT in patients with various coronary stents. Twenty-nine patients (mean age 60 +/- 11 years; 24 men) with 50 stents underwent CT within 2 weeks after stent placement. Mean in-stent luminal diameter and reference vessel diameters proximal and distal to the stent were assessed with CT and compared to quantitative coronary angiography (QCA). Stent length was also compared to the manufacturer's values. Images were reconstructed using a medium-smooth (B30f) and a sharp (B46f) kernel. All 50 stents could be visualized with CT. Mean in-stent luminal diameter was systematically underestimated with CT compared to QCA (1.60 +/- 0.39 mm versus 2.49 +/- 0.45 mm; P < 0.0001), resulting in a modest correlation of QCA versus CT (r = 0.49; P < 0.0001). Stent length as given by the manufacturer was 18.2 +/- 6.2 mm, correlating well with CT (18.5 +/- 5.7 mm; r = 0.95; P < 0.0001) and QCA (17.4 +/- 5.6 mm; r = 0.87; P < 0.0001). Proximal and distal reference vessel diameters were similar with CT and QCA (P = 0.06 and P = 0.03). Images reconstructed with the B46f kernel showed higher image noise (P < 0.05) and lower in-stent CT attenuation values (P < 0.001) than images reconstructed with the B30f kernel. 64-slice CT allows measurement of coronary artery in-stent density, but significantly underestimates the true in-stent diameter compared to QCA.

Relevance:

10.00%

Publisher:

Abstract:

A previously presented algorithm for the reconstruction of bremsstrahlung spectra from transmission data has been implemented in MATHEMATICA. Vectorial algebra was used to solve the matrix system A · F = T. The new implementation was tested by reconstructing photon spectra from transmission data acquired under narrow-beam conditions, for nominal energies of 6, 15, and 25 MV. The results were in excellent agreement with the original calculations. Our implementation has the advantage of being based on a well-tested mathematical kernel; furthermore, it offers a comfortable user interface.
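
The abstract does not describe the numerical scheme used to invert the system; as a generic illustration only, A · F = T can be solved in a least-squares sense with a non-negativity constraint on the spectrum, since physical spectra cannot be negative. The matrix, spectrum, and noise level below are made up.

```python
# Generic illustration (not the paper's algorithm): recover a spectrum F from
# transmission data T given an attenuation matrix A, via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_meas, n_bins = 40, 20
A = rng.uniform(0.1, 1.0, size=(n_meas, n_bins))   # toy attenuation matrix
F_true = rng.uniform(0.0, 1.0, size=n_bins)        # toy spectrum
T = A @ F_true + rng.normal(scale=0.01, size=n_meas)

F_hat, residual = nnls(A, T)                       # solves min ||A F - T|| subject to F >= 0
```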

Relevance:

10.00%

Publisher:

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis-testing procedure to estimate it automatically. We present validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
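
The abstract does not detail the LTS step; as a hedged sketch of the general idea, a least trimmed squares fit keeps only the h observations with the smallest squared residuals at each iteration, which is what lets each stage tolerate a known fraction of outliers. The toy example below applies the trimming idea to a simple line fit rather than to point-to-surface residuals.

```python
# Hedged sketch of a least trimmed squares (LTS) fit: at each iteration only the
# h points with the smallest squared residuals are used, so a fixed fraction of
# gross outliers cannot pull the estimate. A registration stage would apply the
# same trimming idea to point-to-surface residuals instead of a line fit.
import numpy as np

def lts_fit(x, y, outlier_rate=0.2, iters=20):
    h = int(np.ceil((1.0 - outlier_rate) * len(x)))   # number of points to keep
    keep = np.arange(len(x))
    for _ in range(iters):
        a, b = np.polyfit(x[keep], y[keep], deg=1)    # fit on currently kept subset
        resid = (y - (a * x + b)) ** 2
        keep = np.argsort(resid)[:h]                  # keep the h best-fitting points
    return a, b

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=100)
y[:15] += 3.0                                         # 15% gross outliers
slope, intercept = lts_fit(x, y, outlier_rate=0.2)
```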

Relevance:

10.00%

Publisher:

Abstract:

Correspondence establishment is a key step in statistical shape model building. There are several automated methods for solving this problem in 3D, but they usually can only handle objects with simple topology, like that of a sphere or a disc. We propose an extension to correspondence establishment over a population, based on optimization of the minimum description length function, that allows objects with arbitrary topology to be considered. Instead of using a fixed structure of kernel placement on a sphere for the systematic manipulation of point landmark positions, we rely on an adaptive, hierarchical organization of surface patches. This hierarchy can be built on surfaces of arbitrary topology, and the resulting patches are used as a basis for a consistent, multi-scale modification of the surfaces' parameterization, based on point distribution models. The feasibility of the approach is demonstrated on synthetic models with different topologies.

Relevance:

10.00%

Publisher:

Abstract:

Northern hardwood management was assessed throughout the state of Michigan using data collected on recently harvested stands in 2010 and 2011. Methods of forensic estimation of diameter at breast height were compared, and an ideal, localized equation form was selected for use in reconstructing pre-harvest stand structures. Comparisons showed differences in predictive ability among the available equation forms, which led to substantial financial differences when the equations were used to estimate the value of removed timber. Management on all stands was then compared among state, private, and corporate landowners. Comparisons of harvest intensities against a liberal interpretation of a well-established management guideline showed that approximately one third of harvests were conducted in a manner which may imply that the guideline was followed. One third showed higher levels of removals than recommended, and one third of harvests were less intensive than recommended. Multiple management guidelines and postulated objectives were then synthesized into a novel system of harvest taxonomy, against which all harvests were compared. This further comparison showed approximately the same proportions of harvests, while distinguishing sanitation cuts and the future productive potential of harvests cut more intensively than guidelines suggest. Stand structures are commonly represented using diameter distributions. Parametric and nonparametric techniques for describing diameter distributions were applied to the pre-harvest and post-harvest data. A common polynomial regression procedure was found to be highly sensitive to the method of histogram construction that provides the data points for the regression. The discriminative ability of kernel density estimation was substantially different from that of the polynomial regression technique.
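
To illustrate the two techniques compared in the last paragraph (a polynomial regression on histogram bin counts versus kernel density estimation), here is a short sketch on made-up diameter data; note how the polynomial fit depends on the chosen binning while the kernel estimate bypasses it.

```python
# Illustrative comparison on made-up data: a polynomial fit to histogram bin
# midpoints versus a kernel density estimate of a diameter distribution.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
dbh = rng.gamma(shape=2.0, scale=8.0, size=400)       # fake diameters at breast height (cm)

# Histogram-based polynomial regression: the fit depends on the bin width chosen.
counts, edges = np.histogram(dbh, bins=12)
mids = 0.5 * (edges[:-1] + edges[1:])
poly = np.polynomial.Polynomial.fit(mids, counts, deg=3)
smoothed_counts = poly(mids)

# Kernel density estimate: no binning step; bandwidth chosen automatically here.
kde = gaussian_kde(dbh)
grid = np.linspace(0, dbh.max(), 200)
density = kde(grid)
```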

Relevance:

10.00%

Publisher:

Abstract:

The developmental processes and functions of an organism are controlled by its genes and the proteins derived from those genes. The identification of key genes and the reconstruction of gene networks can provide a model to help us understand the regulatory mechanisms for the initiation and progression of biological processes or functional abnormalities (e.g. diseases) in living organisms. In this dissertation, I have developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and also evaluated some existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, the dissertation has two major parts, together forming six chapters. The first part deals with developing association methods for rare variants using genotype data (chapters 4 and 5). The second part deals with developing and/or evaluating statistical methods to identify genes and TFs involved in biological processes, and with constructing their regulatory networks, using gene expression data (chapters 2, 3, and 6). For the first part, I have developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits. Simulation results showed that the proposed method has improved power over the existing weighted sum (WS) method in most settings. The second method uses multiple phenotypes to select a few top significant genes. It then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to GAW 17 data and was able to find several disease risk genes. For the second part, I have worked on three problems. The first problem involved the evaluation of eight gene association methods. A comprehensive comparison of these methods, together with further analysis, clearly demonstrates their distinct and common performance characteristics. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks. This algorithm produced very significant results, and it is the first report of such hierarchical networks for these pathways. The third problem dealt with developing another algorithm, the top-down graphical Gaussian model, which identifies the network governed by a specific TF. The network produced by the algorithm is shown to be highly accurate.
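
The abstract does not give the form of the kernel-machine rare-variant test; a common construction in this family (not necessarily the dissertation's method) builds a weighted linear kernel over the rare-variant genotypes and tests the corresponding variance component with a score-type statistic. The sketch below uses that construction on simulated genotypes, with a simple permutation p-value in place of the usual mixture-of-chi-squares null.

```python
# Hedged sketch of a kernel-machine rare-variant test (SKAT-style; not
# necessarily the dissertation's method): weighted linear genotype kernel and
# score-type statistic Q = (y - mean(y))' K (y - mean(y)).
import numpy as np

rng = np.random.default_rng(6)
n, m = 500, 25
G = rng.binomial(2, 0.01, size=(n, m)).astype(float)     # rare-variant genotypes
y = G[:, :5].sum(axis=1) * 0.8 + rng.normal(size=n)      # quantitative trait

maf = G.mean(axis=0) / 2.0
w = 1.0 / np.sqrt(np.clip(maf * (1 - maf), 1e-6, None))  # up-weight rarer variants
K = (G * w) @ (G * w).T                                   # weighted linear kernel

resid = y - y.mean()
Q = resid @ K @ resid                                     # score-type test statistic

# Illustrative permutation p-value (in practice the null of Q is approximated
# by a mixture of chi-square distributions).
perm = []
for _ in range(999):
    r = rng.permutation(resid)
    perm.append(r @ K @ r)
p_value = (np.sum(np.array(perm) >= Q) + 1) / (len(perm) + 1)
```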

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents experimental and numerical investigations of combustion initiation triggered by electrical-discharge-induced plasma within lean and dilute methane-air mixtures. This research topic is of interest due to its potential to further promote the understanding and prediction of spark ignition quality in high-efficiency gasoline engines, which operate with lean and dilute fuel-air mixtures. It is argued in this dissertation that the plasma-to-flame transition is the key process during the spark ignition event, yet it is also the most complicated and least understood. The investigation therefore focuses on the overlapping period when plasma and flame both exist in the system. The experimental study is divided into two parts. Part I focuses on the flame kernel resulting from the electrical discharge. A number of external factors are found to affect the growth of the flame kernel, resulting in complex correlations between the discharge and the flame kernel. Heat loss from the flame kernel to the cold ambient gas is found to be a dominant factor that quenches the flame kernel. The other experimental focus is the plasma channel. Electrical discharges into gases induce intense and highly transient plasma. Detailed observations of the size and contents of the discharge-induced plasma channel are performed. Given the complex correlations and the multidisciplinary physical and chemical processes involved in the plasma-to-flame transition, the modeling principle adopted is to reproduce the detailed transition numerically with a minimum of analytical assumptions. Detailed measurements obtained from the experimental work allow a more accurate description of the initial reaction conditions. A novel spark source accounting for both energy and species deposition is defined in a justified manner, which is the key feature of the Ignition by Plasma (IBP) model. The results of the numerical simulation are intuitive, and the potential of numerical simulation to better resolve the complex spark ignition mechanism is demonstrated. Imperfections of the IBP model and of the numerical simulation are identified and will be addressed in future work.

Relevance:

10.00%

Publisher:

Abstract:

As object-oriented languages are extended with novel modularization mechanisms, better underlying models are required to implement these high-level features. This paper describes CELL, a language model that builds on delegation-based chains of object fragments. Composition of groups of cells is used: 1) to represent objects, 2) to realize various forms of method lookup, and 3) to keep track of method references. A running prototype of CELL is provided and used to realize the basic kernel of a Smalltalk system. The paper shows, using several examples, how higher-level features such as traits can be supported by the lower-level model.
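
To make the delegation idea concrete, here is a small Python sketch (illustrative only, not the CELL implementation) in which an object is a chain of fragments and method lookup walks the chain until some fragment provides the requested method; composing a second cell on top of a base cell plays the role of applying a trait-like extension.

```python
# Illustrative sketch of delegation-based lookup through a chain of fragments
# (not the CELL implementation): each cell holds some methods and delegates
# anything it does not define to the next cell in the chain.
class Cell:
    def __init__(self, methods, parent=None):
        self.methods = methods            # name -> callable taking the receiver
        self.parent = parent              # next cell in the delegation chain

    def lookup(self, name):
        cell = self
        while cell is not None:
            if name in cell.methods:
                return cell.methods[name]
            cell = cell.parent            # delegate further along the chain
        raise AttributeError(name)

    def send(self, name, *args):
        return self.lookup(name)(self, *args)

# Compose an "object" from two cells: a base fragment plus an extension that
# overrides one method, analogous to applying a trait.
base = Cell({"greet": lambda self: "hello", "name": lambda self: "cell"})
extended = Cell({"greet": lambda self: "hi, " + self.send("name")}, parent=base)

print(extended.send("greet"))   # -> "hi, cell": override found first, "name" found by delegation
```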

Relevance:

10.00%

Publisher:

Abstract:

Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed that significantly reduces this variation for a particular pyramidal blurring technique. Moreover, the pyramidal blur algorithm is generalized to allow for a continuous variation of the blur width. Furthermore, an efficient implementation for programmable graphics hardware is presented. The proposed method is named “quasi-convolution pyramidal blurring” since the resulting effect is very close to image blurring based on a convolution filter for many applications.
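
A minimal CPU-side sketch of the pyramid idea (illustrative only; it uses a fixed 3x3 filter rather than the paper's quasi-convolution analysis filter, and no graphics hardware): each analysis step low-pass filters and downsamples, each synthesis step upsamples and filters again, and the number of pyramid levels controls the blur width.

```python
# Minimal CPU sketch of pyramidal blurring (illustrative only, not the paper's
# quasi-convolution filters): repeatedly downsample with a small analysis filter,
# then upsample back; more pyramid levels give a wider blur.
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.outer([1, 2, 1], [1, 2, 1]) / 16.0       # simple 3x3 analysis/synthesis filter

def pyramid_blur(img, levels):
    stack = [img.astype(float)]
    for _ in range(levels):                           # analysis: filter, then downsample by 2
        low = convolve(stack[-1], KERNEL, mode="nearest")
        stack.append(low[::2, ::2])
    out = stack[-1]
    for _ in range(levels):                           # synthesis: upsample by 2, then filter
        up = np.repeat(np.repeat(out, 2, axis=0), 2, axis=1)
        out = convolve(up, KERNEL, mode="nearest")
    return out

blurred = pyramid_blur(np.random.rand(64, 64), levels=3)
```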