996 results for Distance Matrix
Abstract:
Matrix pore water in the connected inter- and intragranular pore space of low-permeability crystalline bedrock interacts with flowing fracture groundwater predominantly by diffusion. Because of the slow exchange between the two water reservoirs, matrix pore water acts as an archive of past changes in fracture groundwater composition and thus of the palaeohydrological history of a site. Matrix pore water of crystalline bedrock from the Olkiluoto investigation site (SW Finland) was characterised using the stable water isotopes (δ18O, δ2H) combined with the concentrations of dissolved chloride and bromide as natural tracers. The comparison of tracer concentrations in pore water and present-day fracture groundwater suggests that the pore water contains old, dilute meteoric water components that infiltrated the fractures during various warm climate stages. These different meteoric components can be discerned from the diffusion distance between the two reservoirs and placed in the context of the palaeohydrological evolution of the site.
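For a rough illustration of the diffusion-distance argument, a characteristic diffusion length can be estimated as x ≈ √(2Dt). The sketch below uses assumed, generic values for the effective pore diffusion coefficient and the time scale, not figures reported for Olkiluoto.

```python
# Illustrative only: characteristic diffusion length x ~ sqrt(2*D*t).
# D and the residence time are assumed, generic values, not data from the study.
import math

D = 1e-11                          # assumed effective pore diffusion coefficient, m^2/s
years = 10_000                     # assumed age of a meteoric signal, years
t = years * 365.25 * 24 * 3600     # seconds

x = math.sqrt(2 * D * t)
print(f"Characteristic diffusion distance after {years} years: {x:.1f} m")
```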
Abstract:
As additivity is a very useful property for a distance measure, a general additive distance is proposed under the stationary time-reversible (SR) model of nucleotide substitution or, more generally, under the stationary, time-reversible, and rate-variable (SRV) model, which allows rate variation among nucleotide sites. A method for estimating the mean distance and the sampling variance is developed. In addition, a method is developed for estimating the variance-covariance matrix of distances, which is useful for statistical tests of phylogenies and molecular clocks. Computer simulation shows that (i) if the sequences are longer than, say, 1000 bp, the SR method is preferable to simpler methods; (ii) the SR method is robust against deviations from time-reversibility; (iii) when the rate varies among sites, the SRV method is much better than the SR method because the distance is seriously underestimated by the SR method; and (iv) our method for estimating the sampling variance is accurate for sequences longer than 500 bp. Finally, a test is constructed for examining whether DNA evolution follows a general Markovian model.
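As a rough illustration of distances of this type (not necessarily the authors' exact estimator), the sketch below computes a pairwise distance of the form d = -tr(Π log(Π⁻¹F)), where F is the symmetrized divergence (joint frequency) matrix of two aligned sequences and Π holds the stationary base frequencies; the example sequences are invented.

```python
# Minimal sketch of an additive distance under a stationary time-reversible model:
# d = -tr(Pi * log(Pi^{-1} F)), with F the symmetrized divergence matrix.
import numpy as np
from scipy.linalg import logm

def sr_distance(seq1: str, seq2: str) -> float:
    bases = "ACGT"
    idx = {b: i for i, b in enumerate(bases)}
    F = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        if a in idx and b in idx:
            F[idx[a], idx[b]] += 1
    F /= F.sum()                    # joint frequencies
    F = (F + F.T) / 2               # symmetrize (time-reversibility)
    pi = F.sum(axis=1)              # stationary base frequencies
    P = np.diag(1 / pi) @ F         # estimated transition probability matrix
    return float(-np.trace(np.diag(pi) @ logm(P)).real)

print(sr_distance("ACGTACGTAC", "ACGTACGTTC"))   # small invented example
```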
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) by retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of the TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT) based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
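A hypothetical toy version of the regression step described above is sketched below: item difficulties are regressed on the attribute columns of a Q-matrix by ordinary least squares, and the share of difficulty variance explained is reported. The Q-matrix and difficulty values are invented, not TIMSS 2007 data.

```python
# Toy example: regress item difficulty on Q-matrix attribute columns (OLS).
import numpy as np

# Rows = items, columns = attributes (e.g. number, algebra, geometry, data); invented.
Q = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
])
difficulty = np.array([0.35, 0.55, 0.62, 0.41, 0.70, 0.48])   # invented values

X = np.column_stack([np.ones(len(Q)), Q])                     # intercept + attributes
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
pred = X @ coef
ss_res = np.sum((difficulty - pred) ** 2)
ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
print("attribute weights:", coef[1:])
print("variance in difficulty explained (R^2):", 1 - ss_res / ss_tot)
```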
Abstract:
Spectral unmixing (SU) is a technique for characterizing mixed pixels in hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model under two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known and the main problem is to find the minimum number of endmembers under a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main aim of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers in comparison with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages such as enforcing nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentration of SSoM energy in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
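A minimal sketch of the two evaluation metrics mentioned above, under their usual definitions as the angle between a true and an estimated endmember signature (SAD) or abundance vector (AAD); the spectra below are synthetic placeholders.

```python
# SAD/AAD: angle between two nonnegative vectors (radians).
import numpy as np

def angle_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Angle (radians) between two vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

true_sig = np.abs(np.random.default_rng(0).normal(size=200))          # synthetic spectrum
est_sig = true_sig + 0.05 * np.random.default_rng(1).normal(size=200)  # noisy estimate

print("SAD:", angle_distance(true_sig, est_sig))
# Applying the same function to abundance vectors gives the AAD.
```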
Abstract:
In this report, we survey results on distance magic graphs and some closely related graphs. A distance magic labeling of a graph G with magic constant k is a bijection l from the vertex set to {1, 2, ..., n} such that for every vertex x, Σ_{y∈N_G(x)} l(y) = k, where N_G(x) is the set of vertices of G adjacent to x. If the graph G has a distance magic labeling we say that G is a distance magic graph. In Chapter 1, we explore the background of distance magic graphs by introducing examples of magic squares, magic graphs, and distance magic graphs. In Chapter 2, we begin by examining some basic results on distance magic graphs. We next look at results on different graph structures including regular graphs, multipartite graphs, graph products, join graphs, and splitting graphs. We conclude with other perspectives on distance magic graphs including embedding theorems, the matrix representation of distance magic graphs, lifted magic rectangles, and distance magic constants. In Chapter 3, we study graph labelings that retain the same labels as distance magic labelings, but alter the definition in some other way. These labelings include balanced distance magic labelings, closed distance magic labelings, D-distance magic labelings, and distance antimagic labelings. In Chapter 4, we examine results on neighborhood magic labelings, group distance magic labelings, and group distance antimagic labelings. These graph labelings change the label set, but are otherwise similar to distance magic graphs. In Chapter 5, we examine some applications of distance magic and distance antimagic labeling to the fair scheduling of tournaments. In Chapter 6, we conclude with some open problems.
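A small checker for the defining condition above is sketched below. The example graph is the octahedron K_{2,2,2}, a standard distance magic graph: labelling it so that opposite vertices sum to 7 makes every neighbourhood sum equal to k = 14.

```python
# Check the distance magic condition: every neighbourhood sums to the same k.
def is_distance_magic(adj: dict, labels: dict):
    sums = {x: sum(labels[y] for y in adj[x]) for x in adj}
    vals = set(sums.values())
    return (len(vals) == 1, vals.pop())

# Octahedron K_{2,2,2}: vertices 0..5, each adjacent to all others except its
# "opposite" partner (0-1, 2-3, 4-5).
opposite = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
adj = {v: [u for u in range(6) if u != v and u != opposite[v]] for v in range(6)}
labels = {0: 1, 1: 6, 2: 2, 3: 5, 4: 3, 5: 4}   # opposite labels sum to 7

print(is_distance_magic(adj, labels))   # (True, 14)
```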
Abstract:
In this work, we study a version of the general question of how well a Haar-distributed orthogonal matrix can be approximated by a random Gaussian matrix. Here, we consider a Gaussian random matrix (Formula presented.) of order n and apply to it the Gram–Schmidt orthonormalization procedure by columns to obtain a Haar-distributed orthogonal matrix (Formula presented.). If (Formula presented.) denotes the vector formed by the first m coordinates of the ith row of (Formula presented.) and (Formula presented.), our main result shows that the Euclidean norm of (Formula presented.) converges exponentially fast to (Formula presented.), up to negligible terms. To show the extent of this result, we use it to study the convergence of the supremum norm (Formula presented.) and we find a coupling that improves by a factor (Formula presented.) the recently proved best known upper bound on (Formula presented.). Our main result also has applications in Quantum Information Theory.
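A minimal sketch of the construction described above: a Gaussian random matrix is drawn and its columns are orthonormalized (here via QR with a sign correction, which is equivalent to column-wise Gram–Schmidt) to obtain a Haar-distributed orthogonal matrix.

```python
# Haar-distributed orthogonal matrix from a Gaussian matrix via QR.
import numpy as np

def haar_orthogonal(n: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(n, n))           # Gaussian random matrix
    Q, R = np.linalg.qr(G)
    return Q * np.sign(np.diag(R))        # fix column signs so the law is Haar

U = haar_orthogonal(5)
print(np.allclose(U @ U.T, np.eye(5)))    # True: U is orthogonal
```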
Abstract:
Ameliorated strategies were put forward to improve model predictive control for reducing the wind-induced vibration of spatial latticed structures. The dynamic matrix control (DMC) predictive method was used, and a reference trajectory in the form of decaying functions was suggested for the analysis of a spatial latticed structure (SLS) under wind loads. The wind-induced vibration control model of an SLS with the improved DMC predictive control was illustrated; different feedback strategies were then investigated, and a typical SLS was taken as an example to examine the reduction of wind-induced vibration. In addition, the robustness and reliability of the DMC strategy were discussed by varying the model configurations.
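A generic sketch of the basic DMC move computation (not the ameliorated strategy of the study): control increments are chosen to minimize the error between a decaying reference trajectory and the free response predicted from step-response coefficients, with a move-suppression weight. All numerical values are illustrative assumptions.

```python
# Generic dynamic matrix control (DMC) move computation.
import numpy as np

def dmc_moves(step_resp, free_resp, reference, n_moves, weight=0.1):
    P = len(reference)                          # prediction horizon
    A = np.zeros((P, n_moves))                  # dynamic matrix of step-response coefficients
    for j in range(n_moves):
        A[j:, j] = step_resp[: P - j]
    error = np.asarray(reference) - np.asarray(free_resp)
    # Least-squares control increments with move suppression.
    return np.linalg.solve(A.T @ A + weight * np.eye(n_moves), A.T @ error)

# Toy first-order step response and an exponentially decaying reference trajectory
# (the kind of "decaying function" the abstract refers to).
step = 1.0 - np.exp(-0.2 * np.arange(1, 21))
free = np.zeros(20)
ref = 0.5 * np.exp(-0.1 * np.arange(20))
print(dmc_moves(step, free, ref, n_moves=5))
```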
Abstract:
The ideas for this CRC research project are based directly on Sidwell, Kennedy and Chan (2002). That research examined a number of case studies to identify the characteristics of successful projects. The findings were used to construct a matrix of best practice project delivery strategies. The purpose of this literature review is to test the decision matrix against established theory and best practice in the subject of construction project management.
Abstract:
The Co-operative Research Centre for Construction Innovation (CRC-CI) is funding a project known as Value Alignment Process for Project Delivery. The project consists of a study of best practice project delivery and the development of a suite of products, resources and services to guide project teams towards the best procurement approach for a specific project or group of projects. These resources will be focused on promoting the principles that underlie best practice project delivery rather than simply identifying an off-the-shelf procurement system. This project builds on earlier work by Sidwell, Kennedy and Chan (2002) on re-engineering the construction delivery process, which developed a procurement framework in the form of a Decision Matrix.
Abstract:
The effective management of bridge stock involves making decisions as to when to repair, remedy, or do nothing, taking into account the financial and service-life implications. Such decisions require a reliable diagnosis of the cause of distress and an understanding of the likely future degradation. Such diagnoses are based on a combination of visual inspections, laboratory tests on samples, and expert opinions. In addition, the choice of appropriate laboratory tests requires an understanding of the degradation mechanisms involved. Under these circumstances, the use of expert systems or evaluation tools developed from "real-time" case studies provides a promising solution in the absence of expert knowledge. This paper addresses the issues in bridge infrastructure management in Queensland, Australia. Bridges affected by alkali-silica reaction and chloride-induced corrosion have been investigated and the results presented using a mind-mapping tool. The analysis highlights that several levels of rules are required to assess the mechanism causing distress. The systematic development of a rule-based approach is presented. An example of its application to a case study bridge has been used to demonstrate that preliminary results are satisfactory.
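Purely as a hypothetical illustration of a layered rule base of the kind described (the symptoms, field names, and thresholds below are invented and are not the rules developed for the Queensland bridge stock):

```python
# Hypothetical two-level rule-based diagnosis sketch; all rules and values invented.
def diagnose(obs: dict) -> str:
    # Level 1: visual symptoms narrow the candidate mechanisms.
    if obs.get("map_cracking") and obs.get("gel_exudation"):
        candidate = "alkali-silica reaction"
    elif obs.get("cracks_follow_reinforcement") or obs.get("rust_staining"):
        candidate = "chloride-induced corrosion"
    else:
        return "no clear mechanism - further inspection required"
    # Level 2: laboratory tests confirm or reject the candidate.
    if candidate == "alkali-silica reaction" and obs.get("petrography_positive"):
        return candidate + " (confirmed by petrography)"
    if candidate == "chloride-induced corrosion" and obs.get("chloride_content", 0) > 0.06:
        return candidate + " (chloride above assumed threshold, % by weight of concrete)"
    return candidate + " (suspected - confirmatory testing recommended)"

print(diagnose({"cracks_follow_reinforcement": True, "chloride_content": 0.09}))
```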
Abstract:
One of the key issues facing public asset owners is the decision of whether to refurbish aged built assets. This decision requires an assessment of the "remaining service life" of the key components in a building. The remaining service life depends significantly on the existing condition of the asset and on future degradation patterns, considering durability and functional obsolescence. Recently developed methods for residual service life modelling require sophisticated data that are not readily available. Most of the available data are in the form of reports prepared prior to undertaking major repairs or in the form of sessional audit reports. Valuable information from these available sources can serve as benchmarks for estimating the reference service life. The authors have acquired such information from a public asset building in Melbourne. Using this information, the residual service life of a case study building façade has been estimated in this paper based on state-of-the-art approaches. These estimates have been evaluated against expert opinion. Though the results are encouraging, it is clear that state-of-the-art methodologies can only provide meaningful estimates when data of sufficient level and quality are available. This investigation resulted in the development of a new framework for maintenance that integrates condition assessment procedures and the factors influencing residual service life.
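One widely cited approach of the kind referred to above is the ISO 15686 factor method, in which a reference service life is scaled by modifying factors A to G and the residual life is what remains after the current age. The sketch below uses assumed factor values, not data from the Melbourne case study building.

```python
# ISO 15686-style factor method sketch; factor values are illustrative assumptions.
def residual_service_life(reference_life: float, current_age: float, factors: dict) -> float:
    estimated = reference_life
    for f in factors.values():
        estimated *= f                      # estimated service life = RSL x fA x ... x fG
    return max(estimated - current_age, 0.0)

factors = {                                 # factors A..G, assumed values
    "A_component_quality": 1.0,
    "B_design_level": 1.1,
    "C_work_execution": 0.9,
    "D_indoor_environment": 1.0,
    "E_outdoor_environment": 0.8,
    "F_in_use_conditions": 1.0,
    "G_maintenance_level": 1.1,
}
print(residual_service_life(reference_life=40, current_age=25, factors=factors))
```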