59 results for Magic squares.


Relevance:

10.00%

Publisher:

Abstract:

Study based on a research stay at the School of Comparative American Studies of the University of Warwick, United Kingdom, between 2011 and 2012. This project first analyses the popular mobilization of early liberalism and the formation of the first liberal political organizations, which grew out of the secret societies and spread through the main centres of liberal sociability: the patriotic societies. Second, through the study of the mobility of liberals between metropolitan Spain and the viceroyalty of Nueva España, it shows how a new political model based on federalism took shape. The third aspect of analysis is how the Catalan exiles in England received the support of the Foreign Bible Society, which had maintained contacts with the Spanish high clergy since the early 1820s. The last aspect of the research covers the study of urban space in relation to the political practices of citizens, based on the analysis of the formation and enlargement of the squares of the city of Barcelona during the first half of the nineteenth century.

Relevance:

10.00%

Publisher:

Abstract:

We present building blocks for algorithms for the efficient reduction of square factors, i.e., direct repetitions in strings. The basic problem is this: given a string, compute all strings that can be obtained by reducing factors of the form zz to z. Two types of algorithms are treated: an offline algorithm is one that can compute a data structure on the given string in advance, before the actual search for the square begins; in contrast, online algorithms receive all input only at the time when a request is made. For offline algorithms we treat the following problem: let u and w be two strings such that w is obtained from u by reducing a square factor zz to z. If we are further given the suffix table of u, how can we derive the suffix table for w without computing it from scratch? As the suffix table plays a key role in online algorithms for the detection of squares in a string, this derivation can make the iterated reduction of squares more efficient. On the other hand, we also show how a suffix array, used for the offline detection of squares, can be adapted to the new string resulting from the deletion of a square. Because the deletion is a very local change, this adaptation is more efficient than the computation of the new suffix array from scratch.
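The basic reduction operation can be pinned down with a brute-force sketch (quadratic search, nothing like the suffix-table machinery the abstract describes, but it makes the zz-to-z step concrete; the function name is illustrative):

```python
def square_reductions(s):
    """Return all strings obtainable from s by reducing ONE square
    factor zz to z (brute force: try every candidate factor)."""
    results = set()
    n = len(s)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            # factor s[i : i + 2*half] is a square iff its halves match
            if s[i:i + half] == s[i + half:i + 2 * half]:
                results.add(s[:i + half] + s[i + 2 * half:])
    return results

print(sorted(square_reductions("aabb")))  # ['aab', 'abb']
```

Iterating this function until a fixed point is reached yields all fully square-reduced descendants of a string, which is where the incremental suffix-table updates of the paper pay off.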

Relevance:

10.00%

Publisher:

Abstract:

Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
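A minimal sketch of the distribution-free estimate of $\Gamma$ mentioned above, under the simplifying assumption that only second-order moments are collected (the paper's estimator may also stack first-order moments; the function name is mine):

```python
import numpy as np

def gamma_hat(X):
    """Distribution-free estimate of Gamma, the asymptotic covariance
    matrix of the vector of sample second-order moments.

    For each observation x_i, form d_i = vech(x_i x_i') (the stacked
    lower triangle of the outer product); Gamma is estimated by the
    sample covariance matrix of the d_i."""
    n, p = X.shape
    idx = np.tril_indices(p)
    D = np.array([np.outer(x, x)[idx] for x in X])  # n x p(p+1)/2
    return np.cov(D, rowvar=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
G = gamma_hat(X)
print(G.shape)  # 3 variables -> 6 distinct second-order moments
```

Plugging such an estimate into the usual sandwich formula is what yields standard errors and chi-square statistics that remain valid without normality assumptions.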

Relevance:

10.00%

Publisher:

Abstract:

The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determine which store attributes (other than distance) should be included in the new Market Capture Models, and how they ought to be reflected, using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey, and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
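The Multiplicative Competitive Interaction model referenced above can be sketched as follows (the attribute values and sensitivities are hypothetical, not taken from the paper's survey):

```python
import numpy as np

def mci_capture(A, beta):
    """Multiplicative Competitive Interaction model (sketch).

    A[j, k]: value of attribute k for store j (distance can enter as
    one of the attributes, with a negative exponent).
    beta[k]: sensitivity parameter for attribute k.
    Returns the probability that a consumer patronizes each store."""
    attraction = np.prod(A ** beta, axis=1)
    return attraction / attraction.sum()

# Hypothetical example: 3 stores, attributes = (floor area m^2, distance km)
A = np.array([[1200.0, 2.0],
              [800.0, 1.0],
              [1500.0, 4.0]])
beta = np.array([1.0, -2.0])  # attraction grows with area, decays with distance^2
p = mci_capture(A, beta)
print(p.round(3))
```

A standard route to estimating the beta parameters by ordinary least squares, as in the abstract, is the log-centering transformation of the capture shares.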

Relevance:

10.00%

Publisher:

Abstract:

We lay out a model of wage bargaining with two leading features: bargaining is ex post to relevant investments, and there is individual bargaining in firms without a union. We compare individual ex post bargaining to coordinated ex post bargaining and analyze the effects on wage formation. As opposed to ex ante bargaining models, the costs of destroying the employment relationship play a crucial role in determining wages. High firing costs in particular yield a rent for employees. Our theory points to an employer size-wage effect that is independent of the production function and market power. We derive a simple least squares specification from the theoretical model that allows us to estimate components of the wage premium from coordination. We reject the hypothesis that labor coordination does not alter the extensive form of the bargaining game. Labor coordination substantially increases bargaining power but decreases labor's ability to pose costly threats to the firm.

Relevance:

10.00%

Publisher:

Abstract:

We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
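The weight-estimation step can be sketched with a simpler least-squares surrogate: fit nonnegative weights so that the weighted squared coordinate differences reproduce the given squared dissimilarities. Note this is not the paper's majorization algorithm, which fits the distances themselves; the function name is mine:

```python
import numpy as np
from scipy.optimize import nnls

def fit_variable_weights(X, D):
    """Estimate nonnegative variable weights w such that
    sum_k w_k (x_ik - x_jk)^2 approximates D[i, j]**2 (sketch)."""
    n, p = X.shape
    rows, target = [], []
    for i in range(n):
        for j in range(i + 1, n):
            rows.append((X[i] - X[j]) ** 2)   # one row per pair (i, j)
            target.append(D[i, j] ** 2)
    w, _ = nnls(np.array(rows), np.array(target))
    return w

# Sanity check with known weights: the fit recovers them exactly.
w_true = np.array([1.0, 2.0, 0.5, 3.0])
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2 * w_true).sum(axis=2))
w = fit_variable_weights(X, D)
print(w.round(3))
```

Once the weights are fixed, the biplot itself follows the classical path: singular-value decomposition of the weighted, centered data matrix.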

Relevance:

10.00%

Publisher:

Abstract:

Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling in the final part of the transient. Here, we analyze a nonlinear time scale transformation to reduce the total number of samples with minimum signal distortion, achieving a substantial reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
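The idea of the nonlinear time scale can be sketched with a logarithmic resampling grid, which keeps a constant number of samples per decade. This toy version uses ideal interpolation and omits the optimized time-varying filter that the paper proposes; the decay parameters are made up:

```python
import numpy as np

# Hypothetical two-exponential decay with time constants three orders
# of magnitude apart: uniform sampling oversamples the slow tail.
fs = 1e6                             # uniform sampling rate (Hz)
t_uni = np.arange(0, 1.0, 1 / fs)    # 10^6 uniform samples over 1 s
y_uni = np.exp(-t_uni / 1e-4) + np.exp(-t_uni / 1e-1)

# Nonlinear (logarithmic) time scale: exponentially spaced instants.
t_log = np.logspace(-6, 0, 600)      # 600 samples, 100 per decade
y_log = np.interp(t_log, t_uni, y_uni)

print(len(t_uni), "->", len(t_log))
```

The fast component is still densely sampled near t = 0, while the slow tail no longer dominates the record length.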

Relevance:

10.00%

Publisher:

Abstract:

The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters but also the orders of the models have to be estimated. We present an improved spectroscopic technique especially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.

Relevance:

10.00%

Publisher:

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between any two ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step is applied at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
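A constrained NLSQ fit of a single thermal step response can be sketched as follows, assuming a Foster-form model Z(t) = sum_k R_k (1 - exp(-t / tau_k)); the data and model order here are synthetic, and order selection with validation signals (as in the abstract) is assumed done beforehand:

```python
import numpy as np
from scipy.optimize import least_squares

def step_response(t, R, tau):
    """Foster-form thermal step response: sum_k R_k (1 - exp(-t/tau_k))."""
    return sum(r * (1 - np.exp(-t / tk)) for r, tk in zip(R, tau))

def fit_nlsq(t, z, order):
    """Constrained NLSQ fit of a multiexponential transient (sketch).
    Positivity of R_k and tau_k is enforced through bounds."""
    def resid(p):
        R, tau = p[:order], p[order:]
        return step_response(t, R, tau) - z
    p0 = np.concatenate([np.full(order, z.max() / order),
                         np.logspace(-3, 0, order)])  # rough initial guess
    sol = least_squares(resid, p0, bounds=(1e-12, np.inf))
    return sol.x[:order], sol.x[order:]

# Synthetic transient: R = (2, 5) K/W, tau = (10 ms, 1 s)
t = np.logspace(-4, 1, 200)
z = step_response(t, [2.0, 5.0], [1e-2, 1.0])
R, tau = fit_nlsq(t, z, order=2)
```

For the multiple-source case, the same fit is repeated for every element of the thermal impedance matrix.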

Relevance:

10.00%

Publisher:

Abstract:

Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which then become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are related to the response in a nonlinear way, they cannot be selected with the usual F test. In this work we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a nonparametric bootstrap method to estimate their p-values. A numerical example with automobile insurance data is included.
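The two steps above can be sketched directly: classical (metric) multidimensional scaling turns the dissimilarity matrix into latent variables, which then feed an OLS fit. This is a bare-bones version under the assumption of a Euclidean-embeddable dissimilarity; the function name and simulated data are mine:

```python
import numpy as np

def distance_based_regression(D, y, k):
    """Distance-based regression (sketch).

    1) Latent variables via classical MDS: double-center -0.5 * D**2 and
       scale the top-k eigenvectors by sqrt(eigenvalue).
    2) OLS of y on an intercept plus the latent variables."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # Gram matrix of the configuration
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    Xlat = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
    Xd = np.column_stack([np.ones(n), Xlat])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return Xd @ beta, beta

# Simulated check: with plain Euclidean distances, the fitted values
# coincide with OLS on the original predictors.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=40)
D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
yhat, _ = distance_based_regression(D, y, k=3)
```

The interesting cases are non-Euclidean dissimilarities, where the latent regressors are nonlinear in the original predictors, which is exactly why the usual F test fails and the bootstrap tests of the paper are needed.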

Relevance:

10.00%

Publisher:

Abstract:

Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods that take the ceiling effect into account have been proposed to model this type of variable: the tobit models, the Censored Least Absolute Deviations (CLAD) models or the two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect.

Methods: Two different data sets were used to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), in order to model the EQ5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit model and the OLS model. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different datasets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimates of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared.

Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those from the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimates of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model.

Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
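A right-censored tobit model of the kind described above can be sketched by maximum likelihood: uncensored observations contribute the normal density, observations at the ceiling contribute the probability mass above it. The simulated data and function names are illustrative, not the ESEMeD design:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_fit(X, y, ceiling):
    """Right-censored tobit regression by maximum likelihood (sketch)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    cens = y >= ceiling                      # observations stuck at the ceiling

    def nll(params):
        beta, log_s = params[:-1], params[-1]
        s = np.exp(log_s)                    # log-parametrized to keep sigma > 0
        mu = X1 @ beta
        ll = norm.logpdf(y[~cens], mu[~cens], s).sum()   # density part
        ll += norm.logsf(ceiling, mu[cens], s).sum()     # P(y* > ceiling)
        return -ll

    res = minimize(nll, np.zeros(X1.shape[1] + 1), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

# Simulated ceiling effect: latent index censored at 1, loosely mimicking
# a utility score bounded above.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
y_star = 0.5 + X @ np.array([0.8, -0.4]) + rng.normal(scale=0.5, size=400)
y = np.minimum(y_star, 1.0)
beta, sigma = tobit_fit(X, y, ceiling=1.0)
```

Fitting OLS to the same censored y shrinks the slope estimates toward zero, which is the attenuation the abstract reports for the linear model's coefficients.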

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a generalization of density functional theory. The theory leads to single-particle equations of motion with a quasilocal mean-field operator, which contains a quasiparticle position-dependent effective mass and a spin-orbit potential. The energy density functional is constructed using the extended Thomas-Fermi approximation, and the ground-state properties of doubly magic nuclei are considered within the framework of this approach. Calculations were performed using the finite-range Gogny D1S forces, and the results are compared with the exact Hartree-Fock calculations.

Relevance:

10.00%

Publisher:

Abstract:

Isotopic and isotonic chains of superheavy nuclei are analyzed to search for spherical double shell closures beyond Z=82 and N=126 within the new effective field theory model of Furnstahl, Serot, and Tang for the relativistic nuclear many-body problem. We take into account several indicators to identify the occurrence of possible shell closures, such as two-nucleon separation energies, two-nucleon shell gaps, average pairing gaps, and the shell correction energy. The effective Lagrangian model predicts N=172 and Z=120 and N=258 and Z=120 as spherical doubly magic superheavy nuclei, whereas N=184 and Z=114 show some magic character depending on the parameter set. The magicity of a particular neutron (proton) number in the analyzed mass region is found to depend on the number of protons (neutrons) present in the nucleus.
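Two of the shell-closure indicators named above have simple defining formulas: the two-neutron separation energy S_2n(Z, N) = B(Z, N) - B(Z, N-2) and the two-neutron shell gap delta_2n(Z, N) = S_2n(Z, N) - S_2n(Z, N+2), which peaks at a closed shell. A small sketch with made-up binding energies (the numbers are illustrative, not the paper's results):

```python
def s2n(B, Z, N):
    """Two-neutron separation energy S_2n(Z, N) = B(Z, N) - B(Z, N-2)."""
    return B[(Z, N)] - B[(Z, N - 2)]

def d2n(B, Z, N):
    """Two-neutron shell gap: a pronounced peak signals a closure at N."""
    return s2n(B, Z, N) - s2n(B, Z, N + 2)

# Hypothetical binding energies (MeV) along one isotopic chain, with a
# kink at N = 126 that produces a large shell gap there.
B = {(82, 122): 1580.0, (82, 124): 1595.0, (82, 126): 1610.0,
     (82, 128): 1617.0, (82, 130): 1624.0}
print(s2n(B, 82, 126), d2n(B, 82, 126))
```

The analogous two-proton quantities are obtained by stepping Z instead of N.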

Relevance:

10.00%

Publisher:

Abstract:

Due to the immiscibility of 3He in 4He at very low temperatures, mixed helium droplets consist of a core of 4He atoms coated by a 3He layer whose thickness depends on the number of atoms of each isotope. When these numbers are such that the centrifugal kinetic energy of the 3He atoms is small and can be considered as a perturbation to the mean-field energy, a novel shell structure arises, with magic numbers different from those of pure 3He droplets. If the outermost shell is not completely filled, the valence atoms align their spins up to the maximum value allowed by the Pauli principle.
