90 results for UNIFORM BOUNDEDNESS
Abstract:
We present an experimental and numerical study on the influence that particle aspect ratio has on the mechanical and structural properties of granular packings. For grains with maximal symmetry (squares), the stress propagation in the packing localizes, forming chain-like forces analogous to those observed for spherical grains. This scenario can be understood in terms of stochastic models of aggregation and random multiplicative processes. As the grains elongate, the stress propagation is strongly affected: the interparticle normal force distribution tends toward a Gaussian and, correspondingly, the force chains spread, leading to a more uniform stress distribution reminiscent of the hydrostatic profiles known for standard liquids.
Abstract:
Paltridge found reasonable values for the most significant climatic variables by maximizing the material transport part of entropy production in a simple box model. Here, we analyse Paltridge's box model to obtain the energy and entropy balance equations separately. The derived expressions for global entropy production, which is a function of the radiation field, and even for its material transport component, are shown to differ from those used by Paltridge. Plausible climatic states are found at extrema of these parameters. Feasible results are also obtained by minimizing the radiation part of entropy production, in agreement with one of Planck's results. Finally, globally averaged values of the entropy flux of radiation and of material entropy production are obtained for two dynamical extreme cases: an earth with uniform temperature, and an earth in radiative equilibrium at each latitudinal point.
Abstract:
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and an analysis of the changes in bond critical point properties, self-similarity values, and density differences is carried out.
Abstract:
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). In healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. In all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (non-threshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with only a polynomial increase in complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved: namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second problem is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the open problem considered is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem in matroid theory, since it can be restated in terms of the representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, forms the next class in this classification: the identically self-dual bipartite matroids.
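As an illustration of the multiplicativity property this abstract refers to, the canonical multiplicative LSSS is Shamir's scheme: the pointwise product of two degree-t sharings is a degree-2t sharing of the product of the secrets, so the product is recoverable whenever n >= 2t + 1. The sketch below is only a toy illustration of that property (field size and parameters are assumptions, not from the paper):

```python
import random

P = 2**31 - 1  # a Mersenne prime; toy field, not from the paper

def share(secret, t, n):
    """Shamir sharing: a random degree-t polynomial f with f(0) = secret;
    party x (x = 1..n) receives the share (x, f(x) mod P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 over GF(P)."""
    s = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return s

# Multiplicativity: multiplying shares pointwise gives a degree-2t
# sharing of the product of the two secrets.
t, n = 1, 3          # n = 2t + 1 parties suffice for the product
a, b = 7, 11
sa, sb = share(a, t, n), share(b, t, n)
prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
```

Strong multiplicativity, the subject of the first open problem above, additionally requires that the product be recoverable by the honest parties alone.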
Abstract:
In this paper we consider a location and pricing model for a retail firm that wants to enter a spatial market where a competitor firm is already operating as a monopoly with several outlets. The entering firm seeks to determine the optimal uniform mill price and its servers' locations so as to maximize profit, given the competitor firm's price reaction to its entrance. A tabu search procedure for solving the model is presented, together with computational experience.
Abstract:
This paper studies the rate of convergence of an appropriate discretization scheme for the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated to the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler/weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step in the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence in the $L^1(\Omega\times \Bbb R)$ norm and in the sup norm is of the order of $\frac{1}{\sqrt{n}} + h$, while for the densities it is of the order $h + \frac{1}{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin calculus.
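To make the mixed Euler/weakly-interacting-particle scheme concrete, here is a minimal sketch for a toy McKean-Vlasov-type SDE in which the drift depends on the law only through its mean; the drift and all parameters are illustrative assumptions, not the setting of the paper:

```python
import numpy as np

def particle_euler(n, h, T, x0=0.0, sigma=1.0, seed=0):
    """Euler scheme for n weakly interacting particles approximating a toy
    McKean-Vlasov SDE dX_t = b(X_t, E[X_t]) dt + sigma dW_t, with the
    illustrative drift b(x, m) = -(x - m): each particle is pulled toward
    the empirical mean, which stands in for the law of the solution."""
    rng = np.random.default_rng(seed)
    x = np.full(n, x0)
    for _ in range(int(T / h)):
        m = x.mean()  # the empirical measure enters only via its mean
        x = x - (x - m) * h + sigma * np.sqrt(h) * rng.standard_normal(n)
    return x

particles = particle_euler(n=500, h=0.01, T=1.0)
```

The theorem quoted above states that the distribution functions of such an approximation converge at rate $\frac{1}{\sqrt{n}} + h$, and the densities at rate $h + \frac{1}{\sqrt{nh}}$; the interaction kernel chosen here is only a stand-in.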
Abstract:
We introduce a simple new hypothesis testing procedure which, based on an independent sample drawn from a certain density, detects which of $k$ nominal densities the true density is closest to, under the total variation ($L_{1}$) distance. We obtain a density-free uniform exponential bound for the probability of false detection.
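The detection problem can be illustrated by a naive variant: estimate the sampled density (here with a histogram) and pick the nominal density closest to it in L1. The paper's actual procedure and its density-free exponential bound rest on a more careful construction, so everything below is only an illustrative sketch:

```python
import numpy as np

def detect_closest(sample, densities, grid):
    """Return the index of the nominal density closest in L1 distance to a
    histogram estimate of the sampled density (naive illustration only)."""
    widths = np.diff(grid)
    mids = (grid[:-1] + grid[1:]) / 2
    hist, _ = np.histogram(sample, bins=grid, density=True)
    # Approximate the L1 distance by a Riemann sum over the bins.
    dists = [np.sum(np.abs(hist - f(mids)) * widths) for f in densities]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 2000)  # truth: standard normal
grid = np.linspace(-5.0, 5.0, 101)
f0 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)        # N(0, 1)
f1 = lambda x: np.exp(-(x - 2)**2 / 2) / np.sqrt(2 * np.pi)  # N(2, 1)
choice = detect_closest(sample, [f0, f1], grid)
```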
Abstract:
Let a class $\F$ of densities be given. We draw an i.i.d.\ sample from a density $f$ which may or may not be in $\F$. After every $n$, one must make a guess whether $f \in \F$ or not. A class is almost surely testable if there exists a testing sequence such that, for any $f$, we make finitely many errors almost surely. In this paper, several results are given that allow one to decide whether a class is almost surely testable. For example, continuity and square integrability are not testable, but unimodality, log-concavity, and boundedness by a given constant are.
Abstract:
Whereas much literature exists on choice overload, little is known about the effects of the number of alternatives in donation decisions. How does this number affect both the size and the distribution of donations? We hypothesize that donations are affected by the reputation of recipients and increase with their number, albeit at a decreasing rate. Allocations to recipients reflect two different concepts of fairness: equity and equality. Both may be employed but, since they differ in cognitive and emotional costs, the number of recipients matters. On a cognitive (emotional) argument, distributions become more uniform (skewed) as numbers increase. In a survey, respondents indicated how they would donate lottery winnings of 50 euros. Results indicated, first, that more was donated to NGOs that respondents knew better. Second, total donations increased with the number of recipients, albeit at a decreasing rate. Third, distributions of donations became more skewed as numbers increased. We comment on theoretical and practical implications.
Abstract:
The influence of basis set size and correlation energy on the static electrical properties of the CO molecule is assessed. In particular, we have studied both the nuclear relaxation and the vibrational contributions to the static molecular electrical properties, the vibrational Stark effect (VSE) and the vibrational intensity effect (VIE). From a mathematical point of view, when a static and uniform electric field is applied to a molecule, the energy of this system can be expressed as a double power series with respect to the bond length and the field strength. From the power series expansion of the potential energy, field-dependent expressions for the equilibrium geometry, the potential energy, and the force constant are obtained. The nuclear relaxation and vibrational contributions to the molecular electrical properties are analyzed in terms of the derivatives of the electronic molecular properties. In general, the results presented show that accurate inclusion of the correlation energy and large basis sets are needed to calculate the molecular electrical properties and their derivatives with respect to either nuclear displacements or field strength. With respect to experimental data, the calculated power series coefficients are overestimated by the SCF, CISD, and QCISD methods; on the contrary, perturbation methods (MP2 and MP4) tend to underestimate them. On average, using the 6-311+G(3df) basis set, the nuclear relaxation and vibrational contributions to the molecular electrical properties of CO amount to 11.7%, 3.3%, and 69.7% of the purely electronic μ, α, and β values, respectively.
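The double power series referred to in this abstract has the generic form below; the notation is chosen here for illustration and need not match the paper's:

```latex
E(R, F) \;=\; \sum_{i,j \ge 0} \frac{1}{i!\,j!}
\left.\frac{\partial^{\,i+j} E}{\partial R^{i}\,\partial F^{j}}\right|_{R = R_e,\; F = 0}
(R - R_e)^{i}\, F^{j}
```

Setting $\partial E/\partial R = 0$ at fixed $F$ yields the field-dependent equilibrium geometry, and the second derivative in $R$ evaluated there gives the field-dependent force constant, which is the route to the nuclear relaxation contributions discussed above.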
Abstract:
We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production, using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface of ≈10 K; in comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).
Abstract:
One of the typical regulation problems in industrial automation is controlling the linear feed speed of wire onto coils: as thickness accumulates on the coil, the same rotation speed produces a notably higher linear feed speed of the wire, and this mismatch must be compensated automatically to achieve a constant feed speed. This speed regulation problem is very frequent, and difficult to control, in industries that wind some kind of material, such as cabling, wire, paper, sheet metal, tubes, etc. The two main challenges and objectives are, first, regulating the rotation speed of the coil to achieve a given linear feed speed of the wire and, second, guiding the wire fed onto the coil so as to achieve a uniform distribution of each layer of wire. The development consists of the automation and control of an automatic winding machine through the configuration and programming of PLCs, servomotors, and encoders. Finally, a practical assembly is mounted on a test bench to verify and simulate its correct operation, which must solve these speed regulation problems. As final conclusions, the objectives were achieved, together with a methodology for regulating coil rotation speeds using pulse-driven servomotor drives; at the level of knowledge, I have mastered the applications of this type of drive in mechanical constructions.
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images through the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e., an image that, if it were truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that, in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure, in the absence of a priori knowledge about the image configuration, is a uniform field.
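The plain maximum-likelihood (MLE-EM) iteration underlying such reconstructions, started from the uniform image the text identifies as the only correct initialization, can be sketched as follows; the FMAPE algorithm adds the entropy prior and acceleration exponent on top of this step, and the system matrix and data here are toy assumptions:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLE-EM iteration for emission tomography:
    x <- x * A^T(y / Ax) / (A^T 1), starting from a uniform image.
    Assumes A has nonnegative entries with no all-zero column, so the
    forward projection Ax stays strictly positive here."""
    x = np.ones(A.shape[1])     # uniform initial image
    sens = A.sum(axis=0)        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x            # forward projection
        x = x * (A.T @ (y / proj)) / sens
    return x

# Toy 3-detector, 2-voxel system with noise-free data for the demo.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

With noise-free, consistent data as in this demo, the iteration recovers the true image; with Poisson data it converges to a maximum-likelihood image, which is where the entropy prior of FMAPE becomes useful.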
Abstract:
A list of uniform titles of anonymous classical works of Catalan literature from the 12th to the 17th centuries, together with the references to these titles. Each of the 266 entries includes, where pertinent, the following information: uniform title, century of writing, bibliography, commentary on the work, library or repository (if only one manuscript is preserved), variant titles, and titles of versions in other languages. The compilation of the list forms part of the work of updating Anonymous classics and, to take effect, it will have to be approved, with whatever modifications are deemed appropriate, by the Biblioteca de Catalunya.