902 results for Eigenvalue Bounds
Abstract:
The purpose of this work is to develop a methodology that makes it possible to recycle the fines (0 - 4 mm) produced in the Construction and Demolition Waste (CDW) process. At present this fraction is an undesired by-product: it has a high contaminant content and must be separated from the coarse fraction because its high water absorption can affect the properties of the concrete. In fact, in some countries the use of fine recycled aggregates is highly restricted or even banned. This work is part of the European project C2CA (from Concrete to Cement and Clean Aggregates) and was carried out in the Faculty of Civil Engineering and Geosciences of the Technical University of Delft, in particular in the laboratory of Resources And Recycling. This research proposes procedures to close the loop of the entire recycling process. After the classification done by ADR (Advanced Dry Recovery), the two fractions "airknife" and "rotor" (which together constitute the 0 - 4 mm fraction) are fed into a new machine that operates at high temperatures. The temperatures analysed in this research are 600 °C and 750 °C, because at these temperatures the cement bonds are expected to become very weak. The final goal is to "clean" the coarse fraction (0.250 - 4 mm) of the cement still attached to the sand and to concentrate the cement paste in the 0 - 0.250 mm fraction. This new set-up is able to dry the material in a few seconds, divide it into two fractions (coarse and fine) by means of air, and increase the amount of fines (0 - 0.250 mm) by promoting attrition between the particles through a vibration device. The coarse fraction is then processed in a ball mill in order to improve the result and reach the final goal. Thanks to the high temperature, the milling time can be markedly reduced. The 0 - 2 mm sand, after being heated and milled, is used to replace 100% of the norm sand in mortar production. The results are very promising: the mortar made with recycled sand develops strength early; the increase with respect to the mortar made with norm sand is 20% after three days and 7% after seven days. This research demonstrates that, once the temperature is increased, it is possible to obtain a clean coarse fraction (0.250 - 4 mm) free of cement paste, which is instead concentrated in the fine fraction (0 - 0.250 mm). The milling time and the drying time can be greatly reduced, and the recycled sand shows better mechanical performance than the natural sand.
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
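As an illustration of the feasibility check underlying SND, the sketch below (a minimal illustration, not the authors' code; the function name and toy instance are ours) verifies a candidate edge set against pairwise connectivity requirements. By Menger's theorem, the number of edge-disjoint u-v paths equals the local edge connectivity between u and v.

```python
# Minimal SND feasibility sketch: a candidate edge set is feasible if every
# node pair (u, v) is joined by at least r_uv edge-disjoint paths, which by
# Menger's theorem equals the local edge connectivity between u and v.
import networkx as nx

def is_snd_feasible(edges, requirements):
    """edges: iterable of (u, v) pairs; requirements: dict {(u, v): r_uv}."""
    G = nx.Graph(edges)
    for (u, v), r in requirements.items():
        if u not in G or v not in G or nx.edge_connectivity(G, u, v) < r:
            return False
    return True

# Hypothetical toy instance: a 4-cycle provides 2 edge-disjoint paths per pair.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_snd_feasible(edges, {(1, 3): 2}))  # True
print(is_snd_feasible(edges, {(1, 3): 3}))  # False
```

In a tabu search of the kind described, a check of this sort would sit inside the evaluation of each candidate move.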
Abstract:
Magnetic resonance spectroscopy enables insight into the chemical composition of spinal cord tissue. However, spinal cord magnetic resonance spectroscopy has rarely been applied in clinical work due to technical challenges, including strong susceptibility changes in the region and the small cord diameter, which distort the lineshape and limit the attainable signal-to-noise ratio. Hence, extensive signal averaging is required, which increases the likelihood of static magnetic field changes caused by subject motion (respiration, swallowing), cord motion, and scanner-induced frequency drift. To avoid incoherent signal averaging, it would be ideal to perform frequency alignment of individual free induction decays before averaging. Unfortunately, this is not possible due to the low signal-to-noise ratio of the metabolite peaks. In this article, frequency alignment of individual free induction decays is demonstrated to improve spectral quality by using the high signal-to-noise ratio water peak from non-water-suppressed proton magnetic resonance spectroscopy via the metabolite cycling technique. Electrocardiography (ECG)-triggered point resolved spectroscopy (PRESS) localization was used for data acquisition with metabolite cycling or water suppression for comparison. A significant improvement in the signal-to-noise ratio and a decrease in the Cramér-Rao lower bounds of all metabolites are attained by using metabolite cycling together with frequency alignment, as compared to water-suppressed spectra, in 13 healthy volunteers.
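A minimal sketch of the alignment idea, assuming the water resonance dominates each non-water-suppressed FID (array names and shapes are ours, not the authors' pipeline): estimate each shot's water frequency from its spectrum, demodulate it to 0 Hz, then average coherently.

```python
# Per-shot frequency alignment using the dominant water peak, then averaging.
import numpy as np

def align_and_average(fids, dwell_time):
    """fids: 2D complex array (n_shots, n_points); dwell_time in seconds."""
    n_shots, n_points = fids.shape
    freqs = np.fft.fftfreq(n_points, d=dwell_time)
    t = np.arange(n_points) * dwell_time
    aligned = np.empty_like(fids)
    for i, fid in enumerate(fids):
        spectrum = np.fft.fft(fid)
        f_water = freqs[np.argmax(np.abs(spectrum))]          # water frequency
        aligned[i] = fid * np.exp(-2j * np.pi * f_water * t)  # shift to 0 Hz
    return aligned.mean(axis=0)  # coherent average
```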
Abstract:
Alternans of cardiac action potential duration (APD) is a well-known arrhythmogenic mechanism which results from dynamical instabilities. The propensity to alternans is classically investigated by examining APD restitution and by deriving APD restitution slopes as predictive markers. However, experiments have shown that such markers are not always accurate for the prediction of alternans. Using a mathematical ventricular cell model known to exhibit unstable dynamics of both membrane potential and Ca2+ cycling, we demonstrate that an accurate marker can be obtained by pacing at cycle lengths (CLs) varying randomly around a basic CL (BCL) and by evaluating the transfer function between the time series of CLs and APDs using an autoregressive-moving-average (ARMA) model. The first pole of this transfer function corresponds to the eigenvalue (λalt) of the dominant eigenmode of the cardiac system, which predicts that alternans occurs when λalt ≤ −1. For different BCLs, control values of λalt were obtained using eigenmode analysis and compared to the first pole of the transfer function estimated using ARMA model fitting in simulations of random pacing protocols. In all versions of the cell model, this pole provided an accurate estimation of λalt. Furthermore, during slow ramp decreases of BCL or simulated drug application, this approach predicted the onset of alternans by extrapolating the time course of the estimated λalt. In conclusion, stochastic pacing and ARMA model identification represents a novel approach to predict alternans without making any assumptions about its ionic mechanisms. It should therefore be applicable experimentally for any type of myocardial cell.
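The estimation step can be sketched as follows, under assumed low model orders and lag conventions (a generic ARX least-squares fit, not necessarily the authors' exact ARMA fitting procedure): the dominant pole of the fitted transfer function serves as the estimate of λalt.

```python
# Fit APD[n] = sum_i a_i*APD[n-i] + sum_j b_j*CL[n-j] by least squares,
# then read off the dominant pole of the transfer function as lambda_alt.
import numpy as np

def dominant_pole(apd, cl, na=2, nb=2):
    """apd, cl: 1D arrays of action potential durations and cycle lengths."""
    apd = apd - apd.mean()
    cl = cl - cl.mean()
    p = max(na, nb)
    rows, rhs = [], []
    for n in range(p, len(apd)):
        rows.append(np.concatenate([apd[n - na:n][::-1], cl[n - nb:n][::-1]]))
        rhs.append(apd[n])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    a = theta[:na]                                  # autoregressive coefficients
    poles = np.roots(np.concatenate([[1.0], -a]))   # roots of z^na - a1*z^(na-1) - ...
    return poles[np.argmax(np.abs(poles))]          # estimate of lambda_alt
```

Alternans is then flagged when the extrapolated estimate approaches −1.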
Abstract:
Open web steel joists are designed in the United States following the governing specification published by the Steel Joist Institute. For compression members in joists, this specification employs an effective length factor, or K-factor, in confirming their adequacy. In most cases, these K-factors have been conservatively assumed equal to 1.0 for compression web members, despite the fact that intuition and limited experimental work indicate that smaller values could be justified. Given that smaller K-factors could result in more economical designs without a loss in safety, the research presented in this thesis aims to suggest procedures for obtaining more rational values. Three different methods for computing in-plane and out-of-plane K-factors are investigated, including (1) a hand calculation method based on the use of alignment charts, (2) computational critical load (eigenvalue) analyses using uniformly distributed loads, and (3) computational analyses using a compressive strain approach. The latter method is novel and allows for computing the individual buckling load of a specific member within a system, such as a joist. Four different joist configurations are investigated, including an 18K3, 28K10, and two variations of a 32LH06. Based on these methods and the very limited number of joists studied, it appears promising that in-plane and out-of-plane K-factors of 0.75 and 0.85, respectively, could be used in computing the flexural buckling strength of web members in routine steel joist design. Recommendations for future work, which include systematically investigating a wider range of joist configurations and connection restraint, are provided.
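The practical payoff of a smaller K-factor follows directly from the Euler buckling formula Pcr = π²EI/(KL)². The sketch below uses hypothetical member properties purely for illustration; it is not taken from the thesis.

```python
# Flexural buckling strength scales as 1/K^2, so reducing K from 1.0 to 0.75
# raises the computed critical load by 1/0.75^2, about 78%.
import math

def euler_buckling_load(E, I, K, L):
    """Critical load P_cr = pi^2 * E * I / (K*L)^2."""
    return math.pi**2 * E * I / (K * L) ** 2

E = 200e9   # steel modulus of elasticity, Pa (assumed)
I = 1.2e-7  # second moment of area, m^4 (hypothetical web member)
L = 1.5     # member length, m (hypothetical)
for K in (1.0, 0.85, 0.75):
    print(f"K = {K:.2f}: P_cr = {euler_buckling_load(E, I, K, L) / 1e3:.1f} kN")
```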
Abstract:
A systematic analysis of New Physics impacts on the rare decays KL→π0ℓ+ℓ- (ℓ = e, μ) is performed. Thanks to their different sensitivities to flavor-changing local effective interactions, these two modes could provide valuable information on the nature of the possible New Physics at play. In particular, a combined measurement of both modes could disentangle scalar/pseudoscalar from vector or axial-vector contributions. For the latter, model-independent bounds are derived. Finally, the KL→π0μ+μ- forward-backward CP-asymmetry is considered and shown to give interesting complementary information.
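For reference, a forward-backward asymmetry is generically defined as below (a standard textbook form; the paper's precise observable for KL→π0μ+μ-, including the choice of angle θ and any CP-odd weighting, may differ):

```latex
% Generic forward-backward asymmetry, with \theta the relevant decay angle
% (conventionally measured in the dimuon rest frame):
A_{\mathrm{FB}} =
\frac{\int_{0}^{1}\frac{d\Gamma}{d\cos\theta}\,d\cos\theta
      \;-\;\int_{-1}^{0}\frac{d\Gamma}{d\cos\theta}\,d\cos\theta}
     {\int_{-1}^{1}\frac{d\Gamma}{d\cos\theta}\,d\cos\theta}
```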
Abstract:
Outcome-dependent, two-phase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the form of the efficient score and influence functions for the semiparametric regression models studied by Lawless, Kalbfleisch, and Wild (1999) under two-phase sampling designs. We show that the maximum likelihood estimators for both the parametric and nonparametric parts of the model are asymptotically normal and efficient. The efficient influence function for the parametric part agrees with the more general information bound calculations of Robins, Hsieh, and Newey (1995). By verifying the conditions of Murphy and van der Vaart (2000) for a least favorable parametric submodel, we provide asymptotic justification for statistical inference based on profile likelihood.
Abstract:
We analyze three sets of doubly-censored cohort data on incubation times, estimating incubation distributions using semi-parametric methods and assessing the comparability of the estimates. Weibull models appear to be inappropriate for at least one of the cohorts, and the estimates for the different cohorts are substantially different. We use these estimates as inputs for backcalculation, using a nonparametric method based on maximum penalized likelihood. The different incubation distributions all produce fits to the reported AIDS counts that are as good as the fit from a nonstationary incubation distribution that models treatment effects, but the estimated infection curves are very different. We also develop a method for estimating nonstationarity as part of the backcalculation procedure and find that such estimates also depend very heavily on the assumed incubation distribution. We conclude that incubation distributions are so uncertain that meaningful error bounds are difficult to place on backcalculated estimates and that backcalculation may be too unreliable to be used without being supplemented by other sources of information on HIV prevalence and incidence.
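The backcalculation relation at issue can be sketched as a discrete convolution (all numbers below are illustrative, not estimates from the cohorts studied): expected AIDS counts are the infection curve convolved with the incubation distribution, and backcalculation inverts this relation, so any error in the assumed incubation distribution propagates directly into the inferred infection curve.

```python
# Forward model behind backcalculation: AIDS diagnoses in quarter t are past
# infections weighted by the probability of progressing after the elapsed lag.
import numpy as np

quarters = 60                          # quarterly time grid
infections = np.full(quarters, 100.0)  # hypothetical infections per quarter
k = np.arange(quarters + 1)
cdf = 1 - np.exp(-0.01 * k**1.3)       # hypothetical incubation CDF (illustrative)
incubation = np.diff(cdf)              # probability of progressing in each quarter
expected_aids = np.convolve(infections, incubation)[:quarters]
print(expected_aids[:4].round(2))      # expected diagnoses in the first quarters
```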
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely due to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) under TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply dividing the nominal Type I error rate by (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
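A short numeric illustration of the proposed correction (the function name is ours): for all pairwise comparisons among k groups there are m = k(k-1)/2 tests, so Bonferroni uses α/m while the rule above uses α/(k-1).

```python
# Compare the manuscript's alpha/(k-1) scaling with the Bonferroni alpha/m
# for all pairwise equivalence comparisons among k independent groups.
def adjusted_alphas(k, alpha=0.05):
    m = k * (k - 1) // 2               # number of pairwise comparisons
    return alpha / (k - 1), alpha / m  # (paper's rule, Bonferroni)

for k in (3, 5, 10):
    paper, bonf = adjusted_alphas(k)
    print(f"k={k:2d}: alpha/(k-1) = {paper:.4f}  vs  Bonferroni = {bonf:.4f}")
# The (k-1) scaling is much less conservative than Bonferroni as k grows.
```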
Abstract:
The degree of polarization of a reflected field from active laser illumination can be used for object identification and classification. The goal of this study is to investigate methods for estimating the degree of polarization for reflected fields with active laser illumination, which involves the measurement and processing of two orthogonal field components (complex amplitudes), two orthogonal intensity components, and the total field intensity. We propose to replace interferometric optical apparatuses with a computational approach that estimates the degree of polarization from two orthogonal intensity measurements and from total intensity measurements. Cramér-Rao bounds for each of the three sensing modalities with various noise models are computed. Algebraic estimators and maximum-likelihood (ML) estimators are proposed. An active-set algorithm and an expectation-maximization (EM) algorithm are used to compute ML estimates. The performances of the estimators are compared with each other and with their corresponding Cramér-Rao bounds. Estimators for four-channel polarimeter (intensity interferometer) sensing perform better than estimators based on orthogonal intensities or on total intensity. Processing the four intensity channels from the polarimeter, however, requires complicated optical devices, alignment, and four CCD detectors, whereas processing orthogonal intensity data or total intensity data requires only one or two detectors and a computer. The bounds and estimator performances demonstrate that reasonable estimates may still be obtained from orthogonal intensities or total intensity data. Computational sensing is a promising way to estimate the degree of polarization.
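As one concrete example of an algebraic estimator from total intensity data alone, the sketch below uses the standard fully developed speckle relation Var(I)/E[I]² = (1 + P²)/2. This is a method-of-moments illustration under an assumed speckle model, not necessarily one of the estimators or noise models analyzed in the study.

```python
# Method-of-moments estimate of the degree of polarization P from total
# intensity samples, assuming fully developed speckle statistics.
import numpy as np

def dop_from_total_intensity(I):
    """Estimate P from total-intensity samples via the speckle contrast."""
    contrast_sq = I.var() / I.mean() ** 2        # Var(I)/E[I]^2 = (1 + P^2)/2
    return np.sqrt(max(0.0, 2.0 * contrast_sq - 1.0))

# Quick check on synthetic speckle with known P: total intensity is the sum of
# two independent exponential components with means (1+P)/2 and (1-P)/2.
rng = np.random.default_rng(0)
P_true, n = 0.6, 200_000
I = rng.exponential((1 + P_true) / 2, n) + rng.exponential((1 - P_true) / 2, n)
print(dop_from_total_intensity(I))  # close to 0.6
```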
Abstract:
Chapter 1 introduces the basic tools and machinery used within this thesis. Most of the definitions used in the thesis are given there, and we provide a basic survey of topics in graph theory and design theory pertinent to the topics studied in this thesis. In Chapter 2, we are concerned with the study of fixed block configuration group divisible designs, GDD(n, m, k; λ1, λ2). We study those GDDs in which each block has configuration (s, t), that is, GDDs in which each block has exactly s points from one of the two groups and t points from the other. Chapter 2 begins with an overview of previous results and constructions for small group size and block sizes 3, 4, and 5. Chapter 2 is largely devoted to presenting constructions and results about GDDs with two groups and block size 6. We show that the necessary conditions are sufficient for the existence of GDD(n, 2, 6; λ1, λ2) with fixed block configuration (3, 3). For configuration (1, 5), we give minimal or near-minimal index constructions for all group sizes n ≥ 5 except n = 10, 15, 160, or 190. For configuration (2, 4), we provide constructions for several families of GDD(n, 2, 6; λ1, λ2)s. Chapter 3 addresses characterizing (3, r)-regular graphs. We begin by providing previous results on the well-studied class of (2, r)-regular graphs and some results on the structure of large (t, r)-regular graphs. In Chapter 3, we completely characterize all (3, 1)-regular and (3, 2)-regular graphs, as well as sharpen existing bounds on the order of large (3, r)-regular graphs of a certain form for r ≥ 3. Finally, the appendix gives computational data resulting from Sage and C programs used to generate (3, 3)-regular graphs on fewer than 10 vertices.
Abstract:
Small clusters of gallium oxide, a technologically important high-temperature ceramic, together with the interaction of nucleic acid bases with graphene and small-diameter carbon nanotubes, are the focus of the first-principles calculations in this work. A high-performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first-principles calculations are based on density functional theory, employing either the local density or a gradient-corrected approximation together with plane-wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we have performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculations find that all lowest-energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters are highly ionic, similar to the case of bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga2O, it is found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by an experimental result from Neumark's group. Guided by the results of the gallium oxide cluster calculations, the performance-related challenge of computational simulations, that of producing high-performance computing platforms, has been addressed. Several engineering aspects were thoroughly studied during the design, development, and implementation of the high-performance parallel computing platform, rama, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the rama cluster was extensively customized to make it easy to understand and use, for administrators as well as end-users. Following the results of benchmark calculations, and to keep up with the complexity of the systems under study, rama has been expanded to a total of sixty-four processors. Interest in the non-covalent interaction of DNA with carbon nanotubes has steadily increased during the past several years. This hybrid system, at the junction of the biological regime and the nanomaterials world, possesses features which make it very attractive for a wide range of applications. Using the in-house computational power available, we have studied details of the interaction of nucleic acid bases with a graphene sheet as well as with a high-curvature, small-diameter carbon nanotube. The calculated trend in the binding energies strongly suggests that the polarizability of the base molecules determines the interaction strength of the nucleic acid bases with graphene. When comparing the results obtained here for physisorption on the small-diameter nanotube with those from the study on graphene, it is observed that the interaction strength of the nucleic acid bases is smaller for the tube. Thus, these results show that the effect of introducing curvature is to reduce the binding energy. The binding energies for the two extreme cases of negligible curvature (i.e., a flat graphene sheet) and of very high curvature (i.e., a small-diameter nanotube) may be considered as upper and lower bounds. This finding represents an important step towards a better understanding of the experimentally observed sequence-dependent interaction of DNA with carbon nanotubes.
Abstract:
An important problem in unsupervised data clustering is how to determine the number of clusters. Here we investigate how this can be achieved in an automated way by using interrelation matrices of multivariate time series. Two nonparametric and purely data-driven algorithms are expounded and compared. The first exploits the eigenvalue spectra of surrogate data, while the second employs the eigenvector components of the interrelation matrix. Compared to the first algorithm, the second approach is computationally faster and not limited to linear interrelation measures.
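A minimal sketch of the first algorithm's idea, with details (surrogate construction by random permutation, threshold at the largest surrogate eigenvalue) assumed for illustration rather than taken from the paper: eigenvalues of the channel-correlation matrix that exceed those of temporally shuffled surrogates indicate genuine interrelation structure, and their count estimates the number of clusters.

```python
# Count eigenvalues of the correlation matrix that exceed the largest
# eigenvalue seen in permutation surrogates of the same data.
import numpy as np

def estimate_n_clusters(X, n_surrogates=50, seed=0):
    """X: array (n_channels, n_samples) of multivariate time series."""
    rng = np.random.default_rng(seed)
    eigvals = np.linalg.eigvalsh(np.corrcoef(X))
    surrogate_max = []
    for _ in range(n_surrogates):
        # Destroy temporal alignment channel-by-channel to build a surrogate.
        Xs = np.array([rng.permutation(row) for row in X])
        surrogate_max.append(np.linalg.eigvalsh(np.corrcoef(Xs)).max())
    threshold = np.max(surrogate_max)
    return int(np.sum(eigvals > threshold))
```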
Abstract:
In modern powder metallurgy, the first use of a sintering process was in making filaments for incandescent electric lamps. In the short time from the days of Edison to the present, the science of working with metal powders has advanced by leaps and bounds.