973 results for matrix function approximation
Abstract:
This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models so that Intelligent Transportation Systems (ITS) strategies can be applied successfully for traffic congestion relief and dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed methodology framework automatically converts traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions, so the conversion contributes to improving the estimation performance. The proposed method demonstrates better performance than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed.
The successful application of the proposed methodology framework to real road networks demonstrates its ability to provide results both with satisfactory accuracy and within a reasonable time, thus establishing its potential usefulness to support dynamic traffic assignment modeling, ITS systems, and other strategies.
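For reference, the RMSPE figures quoted above can be reproduced with the standard definition of the metric (root of the mean squared relative error); this is a minimal sketch under that assumption, which may differ in scaling from the dissertation's exact convention.

```python
import numpy as np

def rmspe(observed, estimated):
    # Root mean square percentage error: relative error per observation,
    # squared, averaged, then square-rooted.
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.sqrt(np.mean(((estimated - observed) / observed) ** 2))

# Illustrative link volumes: observed vs. estimated.
print(rmspe([100, 200, 400], [98, 204, 396]))
```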
Abstract:
Optical potentials provide critical input for calculations on a wide variety of nuclear reactions, in particular for neutrino-nucleus reactions, which are of great interest in light of the new neutrino oscillation experiments. We present global relativistic folding optical potential (GRFOP) fits to elastic proton scattering data from the C-12 nucleus at energies between 20 and 1040 MeV. We estimate observables, such as the differential cross section, the analyzing power, and the spin rotation parameter, in elastic proton scattering within the relativistic impulse approximation. The new GRFOP potential is employed within the relativistic Green's function model for inclusive quasielastic electron scattering and for (anti)neutrino-nucleus scattering at MiniBooNE kinematics.
Abstract:
The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix-exponential transformation guarantees the positive definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann's seminal work on the estimation of highly non-linear model specifications ("Causality tests and observationally equivalent representations of econometric models", Journal of Econometrics, 1988, 39(1-2), 69–104), especially in developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of the RMESV-ALM model, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high-frequency data for three US financial assets, the new model is estimated and evaluated. Its forecasting performance is compared with that of a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
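The positive-definiteness guarantee of the matrix-exponential transformation can be illustrated numerically. This sketch is not the RMESV-ALM model itself; it only exponentiates an arbitrary symmetric matrix via its eigendecomposition and checks that the result is positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
A = (X + X.T) / 2                  # symmetric, indefinite in general
w, V = np.linalg.eigh(A)           # spectral decomposition A = V diag(w) V^T
S = (V * np.exp(w)) @ V.T          # matrix exponential exp(A)
# The eigenvalues of exp(A) are exp(w) > 0, so S is symmetric positive
# definite regardless of the sign pattern of A.
print(np.all(np.linalg.eigvalsh(S) > 0))  # prints True
```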
Abstract:
Acknowledgements Funding: Chest, Heart and Stroke Scotland, grant ref. R13/A148. The funder had no role in study design, data collection, analysis and interpretation, writing of the manuscript, and in the decision to submit the manuscript for publication. All authors had full access to all the data in the study. The corresponding author had final responsibility for the decision to submit for publication.
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
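The pairwise geometry invoked above is captured by principal angles, which can be computed from the singular values of the product of orthonormal bases. A minimal numpy sketch (the bases here are random and purely illustrative):

```python
import numpy as np

# Two 2-dimensional subspaces of R^5, given by orthonormal basis columns.
rng = np.random.default_rng(1)
U1, _ = np.linalg.qr(rng.standard_normal((5, 2)))
U2, _ = np.linalg.qr(rng.standard_normal((5, 2)))

# Singular values of U1^T U2 are the cosines of the principal angles.
cosines = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
sines = np.sin(np.arccos(cosines))

# Product of sines (small-mismatch regime) and sum of squared sines
# (larger-mismatch regime), the two error exponents discussed above.
print(np.prod(sines), np.sum(sines ** 2))
```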
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two approaches to preventing overfitting are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
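As a toy stand-in for stochastic subspace tracking (not the dissertation's multiscale algorithm), Oja's stochastic update can follow a slowly rotating one-dimensional subspace from streaming samples; the rotation rate, noise level, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(3)
w /= np.linalg.norm(w)             # current subspace estimate (unit vector)
eta = 0.05                         # step size

for t in range(2000):
    # The true subspace rotates slowly in the plane spanned by e1, e2.
    theta = 1e-4 * t
    u = np.array([np.cos(theta), np.sin(theta), 0.0])
    # Streaming sample: signal along u plus small isotropic noise.
    x = rng.standard_normal() * u + 0.05 * rng.standard_normal(3)
    # Oja's rule: gradient step toward the leading principal direction.
    w += eta * (x @ w) * (x - (x @ w) * w)
    w /= np.linalg.norm(w)

print(abs(w @ u))  # close to 1: the estimate aligns with the true subspace
```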
Abstract:
Primary vesicoureteral reflux (VUR) is a common pediatric condition due to a developmental defect in the ureterovesical junction. The prevalence of VUR among individuals with connective tissue disorders, as well as the importance of the ureter and bladder wall musculature for the anti-reflux mechanism, suggest that defects in the extracellular matrix (ECM) within the ureterovesical junction may result in VUR. This review will discuss the function of the smooth muscle and its supporting ECM microenvironment with respect to VUR, and explore the association of VUR with mutations in ECM-related genes.
Abstract:
Fibrosis of any tissue is characterized by excessive extracellular matrix accumulation that ultimately destroys tissue architecture and eventually abolishes normal organ function. Although much research has focused on the mechanisms underlying disease pathogenesis, there are still no effective antifibrotic therapies that can reverse, stop or delay the formation of scar tissue in most fibrotic organs. As fibrosis can be described as an aberrant wound healing response, a recent hypothesis suggests that the cells involved in this process gain an altered heritable phenotype that promotes excessive fibrotic tissue accumulation. This article will review the most recent observations in a newly emerging field that links epigenetic modifications to the pathogenesis of fibrosis. Specifically, the roles of DNA methylation and histone modifications in fibrotic disease will be discussed.
Abstract:
We develop a framework for proving approximation limits of polynomial-size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^(1/2-ε))-approximations for CLIQUE require LPs of size 2^(n^Ω(ε)). This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. These inverse problems are concerned with recovering an unknown quantity of interest from its observations, given the observations and the observation operators. Although the observation operators we consider are linear, the resulting problems are ill-posed in various ways. We recall in this context the classical Tikhonov regularization method, with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, called the Hierarchical Reconstruction (HR) method. The HR method traces back to the Hierarchical Decomposition method in image processing. It successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The hierarchical sum, i.e., the sum of all the hierarchical terms, provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer self-control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting our claims about these advantages of the HR method over the Tikhonov regularization method.
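A hedged sketch of the idea: each hierarchical level solves a Tikhonov problem (quadratic stabilizer) on the previous residual at a finer scale, here taken to be a halved regularization parameter, and the approximation is the running hierarchical sum. The parameter schedule and stabilizer are illustrative, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))            # observation matrix
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(20)

def tikhonov(A, r, lam):
    # argmin_x ||A x - r||^2 + lam ||x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ r)

lam, residual, x_sum = 10.0, b.copy(), np.zeros(10)
for level in range(6):                       # ladder of finitely many scales
    term = tikhonov(A, residual, lam)        # hierarchical term at this scale
    x_sum += term                            # hierarchical sum
    residual = b - A @ x_sum                 # passed on to the next scale
    lam /= 2.0                               # finer scale

print(np.linalg.norm(A @ x_sum - b))         # residual of the hierarchical sum
```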
Abstract:
Trees, hedgerows and woods form the current configuration of the tree network in several ecological regions of the world. In the Trás-os-Montes region, in the northeast of Portugal, they are a traditional component of the Terra Fria landscape and appear in several forms: scattered trees, fencerows, small woodlots, and riparian buffer strips, among others. The extensive livestock systems in this region are based on a set of circuits across the landscape. In this practice, flocks interact with these structures, using them for different functions, which in turn influences the itineraries. Our purpose is to focus on the woody features of the landscape, regarding their configuration, abundance and spatial distribution, in order to examine how the grazing systems depend on the occurrence of these formations, and particularly how the flocks' behavior is related to them. Based on spatial data, the investigation compares the tree network within the agricultural matrix to the grazed territory crossed by the flocks. It also considers the importance of spatial data in interpreting the issue, by suggesting different parameters that may influence the circuits. The recognition of the pressure exerted by the occurrence of the woody structures on the grazing circuits is possible. We believe that the role of these woody structures in supporting the traditional silvopastoral systems has been sufficiently strong to change their distribution pattern.
Abstract:
In this work we isolated a novel crotamine-like protein from Crotalus durissus cascavella venom by a combination of molecular exclusion and analytical reverse-phase HPLC. Its primary structure was YKRCHKKGGHCFPKEKICLPPSSDLGKMDCRWKRKCCKKGSGK. This protein showed a molecular mass of 4892.89 Da, determined by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry. The approximate pI value of this protein was determined to be 9.9 by two-dimensional electrophoresis. The crotamine-like protein isolated here, named Cro 2, produced skeletal muscle spasm and spastic paralysis in mice, similar to other crotamine-like proteins. Cro 2 did not modify insulin secretion at low glucose concentrations (2.8 and 5.6 mM), but at a high glucose concentration (16.7 mM) we observed an increase in insulin secretion of 2.7- to 3.0-fold relative to control. The Na+ channel antagonist tetrodotoxin (6 mM) decreased glucose- and Cro 2-induced insulin secretion. These results suggest that Na+ channels are involved in the insulin secretion. In this article, we also purified peptide fragments from the treatment of reduced and carboxymethylated Cro 2 (RC-Cro 2) with cyanogen bromide and protease V8 from Staphylococcus aureus. Isolated pancreatic beta cells were then treated with the peptides only at high glucose concentration (16.7 mM); in this condition only two peptides induced insulin secretion. Amino acid sequence homology analysis of the whole crotamine as well as the biologically active peptides allowed determining that the consensus regions of the biologically active crotamine responsible for insulin secretion were KGGHCFPKE and DCRWKWKCCKKGSG.
Abstract:
Global land cover maps play an important role in understanding the Earth's ecosystem dynamics. Several global land cover maps have been produced recently, namely Global Land Cover Share (GLC-Share) and GlobeLand30. These datasets are very useful sources of land cover information, and potential users and producers are often interested in comparing them. However, these global land cover maps are produced with different techniques and different classification schemes, making their interoperability in a standardized way a challenge. The Environmental Information and Observation Network (EIONET) Action Group on Land Monitoring in Europe (EAGLE) concept was developed in order to translate the differences between classification schemes into a standardized format that allows a comparison between class definitions. This is done by elaborating an EAGLE matrix for each classification scheme, where a bar code is assigned to each class definition that composes a certain land cover class. Ahlqvist (2005) developed an overlap metric to cope with the semantic uncertainty of geographical concepts, providing a measure of how closely geographical concepts are related to each other. In this paper, global land cover datasets are compared by translating each land cover legend into the EAGLE bar coding for the Land Cover Components of the EAGLE matrix. The bar coding values assigned to each class definition are transformed into a fuzzy function that is used to compute the overlap metric proposed by Ahlqvist (2005), and overlap matrices between land cover legends are elaborated. The overlap matrices allow a semantic comparison between the classification schemes of the global land cover maps. The proposed methodology is tested on a case study in which the overlap metric proposed by Ahlqvist (2005) is computed to compare two global land cover maps for Continental Portugal.
The study yielded the spatial distribution of overlap between the two global land cover maps, GlobeLand30 and GLC-Share. The results show that the GlobeLand30 product overlaps with the GLC-Share product to a degree of 77% in Continental Portugal.
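The flavor of legend comparison can be sketched by representing each class as the set of land cover components flagged in its EAGLE bar code and scoring pairs with a simple Jaccard overlap. This is a deliberately simplified stand-in for, not an implementation of, Ahlqvist's fuzzy overlap metric, and the component names are illustrative.

```python
# Two hypothetical "forest" class definitions from different legends,
# each given as the set of EAGLE-style land cover components it flags.
forest_a = {"trees", "shrubs", "herbaceous"}
forest_b = {"trees", "shrubs"}

def overlap(a, b):
    # Jaccard overlap: shared components over all components used.
    return len(a & b) / len(a | b)

print(overlap(forest_a, forest_b))  # 2/3: two shared of three components
```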
Abstract:
Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n^2) memory, where n is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n^2 elements of the full N × N (where N ≫ n) kernel matrix need to be calculated at each time step, reducing both the time to produce the kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
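For context, the classical fuzzy c-means updates that kernel and streaming variants such as stKFCM build upon can be sketched as follows. This is plain Euclidean FCM on a small synthetic data set, not the proposed algorithm; m is the usual fuzzifier.

```python
import numpy as np

# Two well-separated Gaussian blobs around (0,0) and (1,1).
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(1, 0.1, (20, 2))])
c, m = 2, 2.0
centers = X[rng.choice(len(X), c, replace=False)]

for _ in range(50):
    # Distances from each point to each center, shape (N, c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    # Center update: fuzzy-weighted means of the data.
    centers = (U.T ** m @ X) / np.sum(U.T ** m, axis=1, keepdims=True)

print(centers.round(1))
```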
Abstract:
Purpose: RPE lysosomal dysfunction is a major contributor to AMD pathogenesis. Controlled activity of a major class of RPE proteinases, the cathepsins, is crucial for maintaining correct lysosomal function. Advanced glycation end-products (AGEs) accumulate in Bruch's membrane (BM) with age, impacting critical RPE functions and, in turn, contributing to the development of AMD. The aim of this study was to assess the effect of AGEs on lysosomal function by analysing the expression, processing and activity of the cysteine proteinases cathepsins B, L and S, and the aspartic proteinase cathepsin D. Methods: ARPE-19 cells were cultured on AGE-containing BM mimics (Matrigel) for 14 days and compared to an untreated substrate. Expression levels and intracellular processing of cathepsins B, D, L and S were assessed by qPCR and immunoblotting of cell lysates. Lysosomal activity was investigated using multiple activity assays specific to each of the analysed cathepsins. Statistical analysis was performed using Student's independent t-test. Results: AGE exposure produced a 36% decrease in cathepsin L activity compared to non-treated controls (p = 0.02, n = 3), although no significant changes were observed in protein expression/processing under these conditions. Both the pro and active forms of cathepsin S decreased, by 40% (p = 0.04) and 74% (p = 0.004), respectively (n = 3). In contrast, the active form of cathepsin D increased by 125% (p = 0.005, n = 4). However, no changes were observed in the activity levels of either cathepsin S or D. In addition, cathepsin B expression, processing and activity remained unaltered following AGE exposure. Conclusions: AGE accumulation in the extracellular matrix, a phenomenon associated with the natural aging process of the BM, attenuates the expression, intracellular processing and activity of specific lysosomal effectors.
Altered enzymatic function may impair important lysosomal processes such as endocytosis, autophagy and phagocytosis of photoreceptor outer segments, each of which may influence the age-related dysfunction of the RPE and subsequently, AMD pathogenesis.
Abstract:
Hypertensive patients exhibit higher cardiovascular risk and reduced lung function compared with the general population. Whether this association stems from the coexistence of two highly prevalent diseases or from direct or indirect links between pathophysiological mechanisms is presently unclear. This study investigated the association between lung function and carotid features in non-smoking hypertensive subjects with presumed normal lung function. Hypertensive patients (n = 67) were cross-sectionally evaluated by clinical, hemodynamic, laboratory, and carotid ultrasound analysis. Forced vital capacity, forced expiratory volume in 1 second and in 6 seconds, and lung age were estimated by spirometry. Subjects with ventilatory abnormalities according to current guidelines were excluded. Regression analysis adjusted for age and prior smoking history showed that lung age and the percentages of predicted spirometric parameters were associated with common carotid intima-media thickness, diameter, and stiffness. Further analyses, adjusted for additional potential confounders, revealed that lung age was the spirometric parameter exhibiting the most significant regression coefficients with carotid features. Conversely, plasma C-reactive protein and matrix metalloproteinase-2/9 levels did not influence this relationship. The present findings point toward lung age as a potential marker of vascular remodeling and indicate that lung and vascular remodeling might share common pathophysiological mechanisms in hypertensive subjects.