24 results for COLUMNS
Abstract:
We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed and the methodology is applied to data from the International Social Survey Program.
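The block-matrix result can be checked numerically. A minimal NumPy sketch follows, assuming the block format [[M1, M2], [M2, M1]] (the abstract does not spell the format out, so this arrangement is our assumption): the singular values of the block matrix are exactly those of M1 + M2 together with those of M1 - M2, so an SVD-based method such as PCA or correspondence analysis applied to the block matrix displays the sum and difference components.

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((5, 4))   # data observed at time point 1
M2 = rng.standard_normal((5, 4))   # same rows and columns at time point 2

# Assumed block format: [[M1, M2], [M2, M1]]
X = np.block([[M1, M2], [M2, M1]])

# The singular values of X are the union of those of (M1 + M2) and (M1 - M2),
# with right singular vectors of the form [v; v] and [v; -v] respectively.
sv_block = np.sort(np.linalg.svd(X, compute_uv=False))
sv_sum_diff = np.sort(np.concatenate([
    np.linalg.svd(M1 + M2, compute_uv=False),
    np.linalg.svd(M1 - M2, compute_uv=False),
]))
print(np.allclose(sv_block, sv_sum_diff))  # True
```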
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
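To make the "exponential number of columns" concrete, here is a toy CDLP in the standard form used in this literature, with a single customer segment, MNL choice, and hypothetical numbers throughout. With only 3 products, the 2^3 - 1 nonempty offer sets (one LP column each) can be enumerated directly, which is exactly what column generation must avoid at realistic sizes.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy instance (all numbers hypothetical): 3 products, 2 resources, horizon T.
prices = np.array([100.0, 80.0, 60.0])
weights = np.array([1.0, 1.5, 2.0])      # MNL preference weights; no-purchase weight is 1
A = np.array([[1, 1, 0],                 # resource-product incidence matrix
              [0, 1, 1]])
cap = np.array([20.0, 25.0])
T = 50.0

offer_sets = [S for r in range(1, 4) for S in itertools.combinations(range(3), r)]

def mnl_probs(S):
    """Purchase probabilities under MNL for offer set S."""
    p = np.zeros(3)
    denom = 1.0 + weights[list(S)].sum()
    p[list(S)] = weights[list(S)] / denom
    return p

# One CDLP column per offer set: a revenue rate and resource-consumption rates.
rev = np.array([prices @ mnl_probs(S) for S in offer_sets])
use = np.array([A @ mnl_probs(S) for S in offer_sets]).T   # resources x columns

# max rev.t  s.t.  use.t <= cap,  sum(t) <= T,  t >= 0
res = linprog(c=-rev,
              A_ub=np.vstack([use, np.ones(len(offer_sets))]),
              b_ub=np.append(cap, T),
              bounds=[(0, None)] * len(offer_sets))
print("CDLP value:", -res.fun)
for S, t in zip(offer_sets, res.x):
    if t > 1e-6:
        print("offer set", S, "for", round(t, 2), "time units")
```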
Abstract:
The case of two transition tables is considered, that is, two square asymmetric matrices of frequencies where the rows and columns of the matrices are the same objects observed at three different time points. Different ways of visualizing the tables, either separately or jointly, are examined. We generalize an existing idea, where a square matrix is decomposed into symmetric and skew-symmetric parts, to two matrices, leading to a decomposition into four components: (1) average symmetric, (2) average skew-symmetric, (3) symmetric difference from average, and (4) skew-symmetric difference from average. The method is illustrated with an artificial example and an example using real data from a study of changing values over three generations.
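The four-component decomposition itself is elementary to compute. A minimal NumPy sketch with hypothetical 4 x 4 transition tables follows; how each component is then visualized is the subject of the paper and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N1 = rng.integers(0, 20, (4, 4)).astype(float)  # transitions, time 1 -> 2
N2 = rng.integers(0, 20, (4, 4)).astype(float)  # transitions, time 2 -> 3

sym  = lambda M: 0.5 * (M + M.T)   # symmetric part
skew = lambda M: 0.5 * (M - M.T)   # skew-symmetric part

avg, dif = 0.5 * (N1 + N2), 0.5 * (N1 - N2)

C1 = sym(avg)    # (1) average symmetric
C2 = skew(avg)   # (2) average skew-symmetric
C3 = sym(dif)    # (3) symmetric difference from average
C4 = skew(dif)   # (4) skew-symmetric difference from average

# The four components reconstruct each table exactly:
print(np.allclose(N1, C1 + C2 + C3 + C4))   # True
print(np.allclose(N2, C1 + C2 - C3 - C4))   # True
```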
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
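A sketch of the two-step procedure follows, with hypothetical data and a generic bounded optimizer standing in for the paper's majorization algorithm: first estimate nonnegative variable weights that make the weighted Euclidean distances fit the target dissimilarities, then follow the classical SVD path to biplot coordinates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5))        # cases-by-variables data (hypothetical)
delta = pdist(X, metric="cityblock")    # target dissimilarities to approximate

def weighted_dist(w):
    # d_ij(w) = sqrt( sum_k w_k (x_ik - x_jk)^2 )
    return pdist(X * np.sqrt(w), metric="euclidean")

def stress(w):
    return np.sum((delta - weighted_dist(w)) ** 2)

# The paper uses a majorization algorithm; a generic bounded optimizer
# stands in for it in this sketch.
res = minimize(stress, x0=np.ones(5), bounds=[(0, None)] * 5, method="L-BFGS-B")
w = res.x
print("estimated variable weights:", np.round(w, 3))

# With the weights fixed, the classical SVD path gives the biplot coordinates.
Xw = (X - X.mean(axis=0)) * np.sqrt(w)
U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
rows_2d, cols_2d = U[:, :2] * s[:2], Vt[:2].T   # row and column coordinates
```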
Abstract:
This case study deals with rock face monitoring in urban areas using a Terrestrial Laser Scanner (TLS). The pilot study area is an almost vertical, fifty-meter-high cliff, on top of which the village of Castellfollit de la Roca is located. Rockfall activity is currently causing a retreat of the rock face, which may endanger the houses located at its edge. The TLS datasets consist of high-density 3-D point clouds acquired from five stations, nine times over a time span of 22 months (from March 2006 to January 2008). Change detection, i.e. rockfalls, was performed through a sequential comparison of datasets. Two types of mass movement were detected in the monitoring period: (a) detachment of single basaltic columns, with magnitudes below 1.5 m³, and (b) detachment of groups of columns, with magnitudes of 1.5 to 150 m³. Furthermore, the historical record revealed (c) the occurrence of slab failures with magnitudes higher than 150 m³. Displacements of a likely slab failure were measured, suggesting an apparently stationary stage. Even though failures are clearly episodic, our results, together with the study of the historical record, enabled us to estimate a mean detachment of material of 46 to 91.5 m³ year⁻¹. The application of TLS considerably improved our understanding of rockfall phenomena in the study area.
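The sequential-comparison step can be sketched as nearest-neighbour cloud-to-cloud differencing. In the sketch below the clouds, the simulated detachment, and the 0.05 m noise threshold are all synthetic stand-ins; the actual workflow (scan registration, filtering, volume estimation) is considerably richer.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-ins for two registered TLS point clouds of the rock face.
rng = np.random.default_rng(3)
epoch1 = rng.uniform(0, 50, (100_000, 3))          # earlier epoch
epoch2 = epoch1.copy()
epoch2[:500, 0] -= 0.8                             # simulate detached material

# Sequential comparison: distance from each earlier point to the nearest
# point of the later cloud; large distances flag candidate rockfall scars.
d, _ = cKDTree(epoch2).query(epoch1, k=1)
threshold = 0.05                                   # metres (assumed noise level)
changed = epoch1[d > threshold]
print(f"{len(changed)} points flagged as change")
```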
Abstract:
The impact of applying compost at different rates on nitrogen and phosphorus dynamics was investigated in soils developed on schist in new terraced vineyards (NTV) and in undisturbed areas (NC). Repacked soil columns amended with 0 (control), 50 t ha⁻¹ (T1) and 100 t ha⁻¹ (T2) of compost were studied under laboratory conditions simulating both situations. The columns were maintained for 1 year, during which time a total of 300 mm of simulated rainfall was applied in ten 30 mm applications. Soil organic matter (OM), nitrogen and phosphorus contents were analysed at the end of the study period, and leachates were analysed after each simulated rainfall event. Significant differences in nitrate leaching were observed between the control and the treated soils, and these differences were greater in the NC (control = 1.368 g, T1 = 1.526 g and T2 = 1.686 g) than in the NTV soils (control = 0.61 g, T1 = 1.068 g and T2 = 1.283 g). The relative effect was greater in the NTV soils (T1/control = 1.11 vs. 1.75 and T2/control = 1.23 vs. 2.1 for NC and NTV, respectively). The nitrate concentration in the leached water reached up to 400 mg L⁻¹, which implied a risk of groundwater pollution. Phosphorus losses through leaching were very low, with concentrations of < 0.15 mg L⁻¹ and no significant differences between treatments. The phosphorus concentration in the surface horizon increased by 50.8% in T1 and by 66.8% in T2 in the NC soils, compared with increases of 20.3% and 38%, respectively, in the NTV soils. Owing to the high infiltration capacity of the study soils, leaching effects must be considered in order to prevent groundwater pollution.
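The reported relative effects are simple ratios of the leached nitrate masses and can be recovered, up to rounding, from the figures above:

```python
# Nitrate leached per treatment (grams), as reported in the abstract.
leached = {"NC":  {"control": 1.368, "T1": 1.526, "T2": 1.686},
           "NTV": {"control": 0.610, "T1": 1.068, "T2": 1.283}}

for soil, g in leached.items():
    print(soil,
          "T1/control =", round(g["T1"] / g["control"], 2),
          "T2/control =", round(g["T2"] / g["control"], 2))
# NC  T1/control = 1.12 T2/control = 1.23
# NTV T1/control = 1.75 T2/control = 2.1
```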
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes, but in the last few years these problems have gone from an entertainment to an interesting research area, and a doubly interesting one, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
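To illustrate what a GSP instance with rectangular regions looks like, here is a compact backtracking solver for blocks of m rows and n columns. It is a plain CSP-style search, not the thesis's CSP/SAT encodings or its balanced instance generators, and the 6 x 6 puzzle below (with 2 x 3 blocks and holes marked 0) is a hypothetical instance.

```python
# Backtracking solver for the Generalized Sudoku Problem with rectangular
# m x n block regions on an N x N grid, where N = m * n.
def solve_gsp(grid, m, n):
    """grid: N x N list of lists, 0 marking holes; solves in place."""
    N = m * n

    def candidates(r, c):
        used = set(grid[r]) | {grid[i][c] for i in range(N)}
        br, bc = (r // m) * m, (c // n) * n          # top-left cell of the block
        used |= {grid[i][j] for i in range(br, br + m)
                            for j in range(bc, bc + n)}
        return [v for v in range(1, N + 1) if v not in used]

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in candidates(r, c):
                    grid[r][c] = v
                    if solve_gsp(grid, m, n):
                        return True
                grid[r][c] = 0                       # undo and backtrack
                return False
    return True                                      # no holes left: solved

puzzle = [[1, 0, 0, 4, 0, 6],
          [0, 5, 0, 0, 2, 0],
          [0, 3, 0, 5, 0, 0],
          [5, 0, 4, 0, 3, 0],
          [0, 1, 0, 6, 0, 5],
          [0, 0, 5, 0, 1, 0]]
if solve_gsp(puzzle, 2, 3):
    for row in puzzle:
        print(row)
```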
Abstract:
Hydrogenated nanocrystalline silicon (nc-Si:H) obtained by hot-wire chemical vapour deposition (HWCVD) at a low substrate temperature (150 °C) has been incorporated as the active layer in bottom-gate thin-film transistors (TFTs). These devices were electrically characterised by measuring the output and transfer characteristics in vacuum at different temperatures. The field-effect mobility showed a thermally activated behaviour, which could be attributed to carrier trapping in the band tails, as in hydrogenated amorphous silicon (a-Si:H), and to potential barriers for electronic transport. Trapped charge at the interfaces of the columns, which are typical of nc-Si:H, would account for these barriers. By using the Levinson technique, the quality of the material at the column boundaries could be studied. Finally, these results were interpreted according to the particular microstructure of nc-Si:H.
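The thermally activated behaviour referred to above is the Arrhenius law mu(T) = mu0 * exp(-Ea / kT). A minimal sketch of extracting the activation energy from mobility-temperature data follows; the numbers are synthetic, not the paper's measurements.

```python
import numpy as np

# Arrhenius analysis of a thermally activated field-effect mobility,
# mu(T) = mu0 * exp(-Ea / (k*T)); the data below are synthetic.
k = 8.617e-5                                         # Boltzmann constant, eV/K
T = np.array([260.0, 280.0, 300.0, 320.0, 340.0])    # temperatures, K
Ea_true, mu0_true = 0.15, 12.0                       # eV, cm^2/(V s)
mu = mu0_true * np.exp(-Ea_true / (k * T))

# Linear fit of ln(mu) against 1/(k*T): slope = -Ea, intercept = ln(mu0).
slope, intercept = np.polyfit(1.0 / (k * T), np.log(mu), 1)
print(f"Ea = {-slope:.3f} eV, mu0 = {np.exp(intercept):.2f} cm^2/(V s)")
```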