Abstract:
We analyse in a common framework the properties of the Voronoi tessellations resulting from regular 2D and 3D crystals and those of tessellations generated by Poisson distributions of points, thus bridging symmetry-breaking processes and the approach to uniformly random distributions of seeds. We perturb crystalline structures in 2D and 3D with a spatial Gaussian noise whose dimensionless strength is α and analyse the statistical properties of the cells of the resulting Voronoi tessellations using an ensemble approach. In 2D we consider triangular, square and hexagonal regular lattices, resulting in hexagonal, square and triangular tessellations, respectively. In 3D we consider the simple cubic (SC), body-centred cubic (BCC), and face-centred cubic (FCC) crystals, whose corresponding Voronoi cells are the cube, the truncated octahedron, and the rhombic dodecahedron, respectively. In 2D, for all values α>0, hexagons constitute the most common class of cells. Noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous at α=0. By contrast, the honeycomb hexagonal tessellation is topologically stable and, empirically, all Voronoi cells remain hexagonal for small but finite noise with α<0.12. Essentially the same happens in the 3D case, where only the tessellation of the BCC crystal is topologically stable against noise of small but finite intensity. In both the 2D and 3D cases, already for a moderate amount of Gaussian noise (α>0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations become indistinguishable. When α>2, results converge to those of Poisson-Voronoi tessellations. In 2D, while the isoperimetric ratio increases with noise for the perturbed hexagonal tessellation, for the perturbed triangular and square tessellations it is optimised at a specific value of the noise intensity.
The same applies in 3D, where noise degrades the isoperimetric ratio of the perturbed FCC and BCC lattices, whereas the opposite holds for perturbed SC lattices. This allows a weaker form of the Kelvin conjecture to be formulated. By jointly analysing the statistical properties of the area and of the volume of the cells, we find that the cell shape also fluctuates strongly when noise is introduced into the system. In 2D, the geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α>2; in this limit the Desch law for perimeters is shown not to hold and a square-root dependence on n is established, in agreement with exact asymptotic results. Anomalous scaling relations are observed between the perimeter and the area of the cells in 2D and between their surface area and volume in 3D; except for the hexagonal (2D) and FCC (3D) structures, this also applies for infinitesimal noise. In the Poisson-Voronoi limit, the anomalous exponent is about 0.17 in both the 2D and 3D cases. A positive anomaly in the scaling indicates that large cells preferentially feature large isoperimetric quotients. Since the number of faces is strongly correlated with the sphericity (cells with more faces are bulkier), in 3D the anomalous scaling is heavily reduced when power-law fits are performed separately on cells with a specific number of faces.
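The 2D stability experiment can be reproduced in miniature. The sketch below is an illustration only, not the paper's ensemble machinery: it perturbs a triangular lattice with Gaussian noise of dimensionless strength α and counts how many bounded Voronoi cells remain hexagonal, using `scipy.spatial.Voronoi`; the lattice size and α values are invented for the example.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

def hex_lattice(n):
    """Triangular lattice (hexagonal Voronoi cells) on an n x n patch, unit spacing."""
    pts = [(col + 0.5 * (row % 2), row * np.sqrt(3) / 2)
           for row in range(n) for col in range(n)]
    return np.asarray(pts)

def perturb(points, alpha):
    """Add isotropic Gaussian noise of dimensionless strength alpha."""
    return points + alpha * rng.normal(size=points.shape)

def edge_counts(points):
    """Edge counts of the bounded Voronoi cells (unbounded boundary cells are skipped)."""
    vor = Voronoi(points)
    counts = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if region and -1 not in region:  # keep only bounded cells
            counts.append(len(region))
    return counts

pts = hex_lattice(30)
for alpha in (0.05, 0.5, 2.0):
    counts = edge_counts(perturb(pts, alpha))
    frac_hex = np.mean(np.array(counts) == 6)
    print(f"alpha={alpha}: fraction of hexagonal cells = {frac_hex:.2f}")
```

For small α the hexagon fraction stays near one, consistent with the claimed topological stability of the honeycomb tessellation, and it drops toward the Poisson-Voronoi value as α grows.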
Abstract:
This paper reports the findings from two large-scale national online surveys carried out in 2009 and 2010, which explored the state of history teaching in English secondary schools. Large variation in provision was identified within comprehensive schools in response to national policy decisions and initiatives. Using the data from the surveys together with publicly available school-level data, this study examines situated factors, particularly the nature of the school intake, the numbers of pupils with special educational needs and the socio-economic status of the area surrounding the school, and the impact these have on the provision of history education. The findings show that there is a growing divide between those students who have access to the ‘powerful knowledge’ provided by subjects like history, and those who do not.
Abstract:
In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a derivative-free line search, and to obtain an appropriate coverage of points on the FPS boundary it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters and on an agro-forestry simulation model.
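As a rough illustration of the underlying idea (not the paper's iterative regression procedure), the sketch below labels a grid of parameter pairs as feasible or infeasible for a toy sine-and-exponential model with a bounded measurement error, then fits an RBF SVM whose decision surface approximates the FPS boundary. The model, error bound, grid and kernel settings are all invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Toy set-membership problem: y = a*sin(t) + exp(b*t), noise-free observations
# and an error bound eps defining the feasible parameter set (FPS).
t = np.linspace(0.0, 1.0, 20)
a_true, b_true, eps = 1.0, 0.5, 0.15
y_obs = a_true * np.sin(t) + np.exp(b_true * t)

def feasible(a, b):
    """A parameter pair is feasible if every residual stays within the error bound."""
    return bool(np.all(np.abs(a * np.sin(t) + np.exp(b * t) - y_obs) <= eps))

# Sample the parameter box and label each point feasible / infeasible.
A, B = np.meshgrid(np.linspace(0, 2, 40), np.linspace(0, 1, 40))
params = np.column_stack([A.ravel(), B.ravel()])
labels = np.array([feasible(a, b) for a, b in params])

# An RBF SVM classifier whose decision surface approximates the FPS boundary.
clf = SVC(kernel="rbf", gamma=20.0, C=100.0).fit(params, labels)
print("feasible fraction of the grid:", labels.mean())
print("true parameters classified feasible:", clf.predict([[a_true, b_true]])[0])
```

The level set `clf.decision_function(...) == 0` then serves as an explicit approximation of the FPS boundary that can be refined by sampling more densely near it.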
Abstract:
We prove that the vast majority of JC∗-triples satisfy the condition of universal reversibility. Our characterisation is that a JC∗-triple is universally reversible if and only if it has no triple homomorphisms onto Hilbert spaces of dimension greater than two nor onto spin factors of dimension greater than four. We establish corresponding characterisations in the cases of JW∗-triples and of TROs (regarded as JC∗-triples). We show that the distinct natural operator space structures on a universally reversible JC∗-triple E are in bijective correspondence with a distinguished class of ideals in its universal TRO, identify the Shilov boundaries of these operator spaces and prove that E has a unique natural operator space structure precisely when E contains no ideal isometric to a nonabelian TRO. We deduce some decomposition and completely contractive properties of triple homomorphisms on TROs.
Abstract:
In their comment on my 1990 article, Yeh, Suwanakul, and Mai extend my analysis, which focused attention exclusively on firm output, to allow for simultaneous endogeneity of price, aggregate output, and the number of firms. They show that, with downward-sloping demand, industry output adjusts positively to revenue-neutral changes in the marginal rate of taxation. This result is significant for two reasons. First, we are more often interested in predictions about aggregate phenomena than we are in predictions about individual firms. Indeed, firm-level predictions are frequently irrefutable since firm data are often unavailable. Second, the authors derive their result under a set of conditions that appear to be more general than those invoked in my 1990 article. In particular, they circumvent the need to invoke specific assumptions about the nature of firms' aversion toward risk. I consider this a useful extension and I appreciate the careful scrutiny of my paper.
Abstract:
Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry. Example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E, 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant, with 75 events identified that last for more than 3 months and span at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large-area, sustained droughts result from the combination of multiple small-area (∼10⁶ km²), short-duration (∼3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
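A minimal version of the structure-identification step can be sketched with `scipy.ndimage`: label connected below-threshold cells of a 3D standardised-precipitation array and summarise each resulting event by its volume and centroid. The synthetic field, array shape and threshold below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic standardised-precipitation field on a (time, lat, lon) grid.
spi = rng.normal(size=(48, 20, 30))
threshold = -1.0  # drought where SPI falls below the threshold

# Drought events = connected sets of below-threshold cells in 3D
# (axis order is immaterial for the labelling itself).
mask = spi < threshold
labels, n_events = ndimage.label(mask)  # default 6-connectivity in 3D

# Event-geometry summaries: volume (cell count) and centroid of each structure.
idx = range(1, n_events + 1)
volumes = ndimage.sum_labels(mask, labels, index=idx)
centroids = ndimage.center_of_mass(mask, labels, index=idx)

largest = int(np.argmax(volumes))
print(f"{n_events} events; largest spans {int(volumes[largest])} cells")
print("largest event centroid (t, lat, lon):", centroids[largest])
```

On real data one would additionally filter events by duration and areal extent (the paper's 3-month and 0.5 × 10⁶ km² thresholds) before computing similarity between structures.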
Abstract:
This work presents two schemes for measuring the linear and angular kinematics of a rigid body using a kinematically redundant array of triple-axis accelerometers, with potential applications in biomechanics. A novel angular velocity estimation algorithm is proposed and evaluated that can compensate for angular velocity errors using measurements of the direction of gravity. Analysis and discussion of optimal sensor array characteristics are provided. A damped two-axis pendulum was used to excite all 6 DoF of a suspended accelerometer array through determined complex motion, and is the basis of both simulation and experimental studies. The relationship between accuracy and sensor redundancy is investigated for arrays of up to 100 triple-axis accelerometers (300 accelerometer axes) in simulation and 10 equivalent sensors (30 accelerometer axes) in the laboratory test rig. The paper also reports on the sensor calibration techniques and hardware implementation.
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
Abstract:
Consider the massless Dirac operator on a 3-torus equipped with Euclidean metric and standard spin structure. It is known that the eigenvalues can be calculated explicitly: the spectrum is symmetric about zero and zero itself is a double eigenvalue. The aim of the paper is to develop a perturbation theory for the eigenvalue with smallest modulus with respect to perturbations of the metric. Here the application of perturbation techniques is hindered by the fact that eigenvalues of the massless Dirac operator have even multiplicity, which is a consequence of this operator commuting with the antilinear operator of charge conjugation (a peculiar feature of dimension 3). We derive an asymptotic formula for the eigenvalue with smallest modulus for arbitrary perturbations of the metric and present two particular families of Riemannian metrics for which the eigenvalue with smallest modulus can be evaluated explicitly. We also establish a relation between our asymptotic formula and the eta invariant.
Abstract:
Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and that there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between the CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, enabling a more complete model investigation than the more traditional single best-estimate simulation alone.
Abstract:
Transreal arithmetic totalises real arithmetic by defining division by zero in terms of three definite, non-finite numbers: positive infinity, negative infinity and nullity. We describe the transreal tangent function and extend continuity and limits from the real domain to the transreal domain. With this preparation, we extend the real derivative to the transreal derivative and extend proper integration from the real domain to the transreal domain. Further, we extend improper integration of absolutely convergent functions from the real domain to the transreal domain. This demonstrates that transreal calculus contains real calculus and operates at singularities where real calculus fails.
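A minimal sketch of the totalised division underlying this construction (our illustration, not code from the paper; `float('nan')` is used as a stand-in for nullity, even though IEEE semantics differ from transreal arithmetic in general):

```python
import math

INF = math.inf
NULLITY = math.nan  # stand-in for the transreal nullity

def t_div(a, b):
    """Transreal division: total on all real inputs, including b == 0."""
    if b != 0:
        return a / b
    if a > 0:
        return INF      # a/0 = +infinity for a > 0
    if a < 0:
        return -INF     # a/0 = -infinity for a < 0
    return NULLITY      # 0/0 = nullity

print(t_div(1, 0))   # inf
print(t_div(-1, 0))  # -inf
print(t_div(0, 0))   # nan (our stand-in for nullity)
```

Ordinary real quotients are untouched, so the sketch mirrors the paper's claim that the transreal operations contain the real ones and additionally operate at the singularity b = 0.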
Abstract:
Multibiometrics aims at improving biometric security in the presence of spoofing attempts, but exposes a larger number of points of attack. Standard fusion rules have been shown to be highly sensitive to spoofing attempts, even in the case of a single fake instance. This paper presents a novel spoofing-resistant fusion scheme proposing the detection and elimination of anomalous fusion input in an ensemble of evidence with liveness information. The approach aims at making multibiometric systems more resistant to presentation attacks by modeling the typical behaviour of human surveillance operators detecting anomalies, as employed in many decision support systems. It is shown to improve security, while retaining the high accuracy level of standard fusion approaches, on the latest Fingerprint Liveness Detection Competition (LivDet) 2013 dataset.
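A hypothetical sketch of the elimination idea (the paper's actual scheme models operator behaviour; here we simply drop any modality whose liveness score falls below a threshold before applying a standard mean rule, and all scores and thresholds are invented):

```python
import numpy as np

def spoof_resistant_fusion(match_scores, liveness_scores, liveness_thresh=0.5):
    """Fuse match scores in [0, 1], eliminating modalities flagged as non-live."""
    match = np.asarray(match_scores, dtype=float)
    live = np.asarray(liveness_scores, dtype=float) >= liveness_thresh
    if not live.any():
        return 0.0  # every input looks spoofed: reject outright
    return float(match[live].mean())  # mean rule over the surviving modalities

# Genuine user, all sensors report live samples:
print(spoof_resistant_fusion([0.9, 0.8, 0.85], [0.9, 0.8, 0.95]))
# A high-scoring fake finger is flagged by liveness and excluded from fusion:
print(spoof_resistant_fusion([0.95, 0.2, 0.25], [0.1, 0.9, 0.8]))
```

In the second call, the spoofed modality's high match score no longer dominates the fused score, which is the failure mode of plain sum- or mean-rule fusion under a single fake instance.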
Abstract:
Pre-eclampsia (PE) complicates around 3% of all pregnancies and is one of the most common causes of maternal mortality worldwide. The pathophysiology of PE remains unclear; however, its underlying cause originates in the placenta and manifests as raised blood pressure, proteinuria, vascular or systemic inflammation and hypercoagulation in the mother. Women who develop PE are also at significantly higher risk of subsequently developing cardiovascular (CV) disease. In PE, the failing syncytiotrophoblast layer of the placenta, under endoplasmic reticulum, oxidative and inflammatory stress, sheds increased numbers of syncytiotrophoblast extracellular vesicles (STBEV) into the maternal circulation. Platelet reactivity, size and concentration are also known to be altered in some women who develop PE, although the underlying reasons have not been determined. In this study we show that STBEV from disease-free placentas, isolated ex vivo by dual placental perfusion, associate rapidly with platelets. We provide evidence that STBEV isolated from normal placentas cause platelet activation and that this is increased with STBEV from PE pregnancies. Furthermore, treatment of platelets with aspirin, currently prescribed for women at high risk of PE to reduce platelet aggregation, also inhibits STBEV-induced reversible aggregation of washed platelets. Increased platelet reactivity as a result of exposure to STBEV derived from PE placentas correlates with the increased thrombotic risk associated with PE. These observations establish a possible direct link between the clotting disturbances of PE and dysfunction of the placenta, as well as with the known increased risk of thromboembolism associated with this condition.
Abstract:
In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup achieved.
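One simple preconditioner in this spirit (a sketch of the general idea, not the paper's exact algorithm) keeps only each x-column's extreme y values. For integer coordinates in a p × q box this takes O(n), needs no explicit sort because columns are then visited in increasing x, and cannot discard a hull vertex: a hull vertex must be extreme in y among the points sharing its x coordinate, since otherwise it lies strictly inside a vertical segment between two other points.

```python
def precondition(points, p):
    """Reduce integer points in [0, p] x [0, q] to per-column y-extremes,
    emitted in increasing x order (so the hull step can skip its sort)."""
    ymin, ymax = {}, {}
    for x, y in points:                 # single O(n) pass
        if x not in ymin or y < ymin[x]:
            ymin[x] = y
        if x not in ymax or y > ymax[x]:
            ymax[x] = y
    reduced = []
    for x in range(p + 1):              # implicit sort: scan column indices
        if x in ymin:
            reduced.append((x, ymin[x]))
            if ymax[x] != ymin[x]:
                reduced.append((x, ymax[x]))
    return reduced
```

The reduced set has at most 2(p + 1) points and can be fed to, e.g., Andrew's monotone-chain hull with the sorting step omitted.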
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projection (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly for mapping text sets, where it was most extensively tested.
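The positioning step can be caricatured as a sparse least-squares problem. The sketch below is a simplified stand-in for LSP, not the paper's formulation: control points are laid out with PCA rather than the paper's initial projection, the neighbourhood construction is plain k-nearest-neighbours, and all sizes and weights are invented. Each non-control point is asked to sit at the centroid of its mD nearest neighbours while a few heavily weighted control points pin down the global layout.

```python
import numpy as np

rng = np.random.default_rng(4)

def lsp_sketch(X, n_control=10, k=8):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # mD distances
    # Control-point layout: first two principal components (our substitute
    # for the paper's initial projection of the control points).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pca2 = Xc @ Vt[:2].T
    control = rng.choice(n, size=n_control, replace=False)

    # Laplacian-style system: one centroid equation per point, plus one
    # heavily weighted anchoring equation per control point.
    L = np.zeros((n + n_control, n))
    b = np.zeros((n + n_control, 2))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # k nearest neighbours in mD
        L[i, i] = 1.0
        L[i, nbrs] = -1.0 / k
    for j, c in enumerate(control):
        L[n + j, c] = 10.0
        b[n + j] = 10.0 * pca2[c]
    Y, *_ = np.linalg.lstsq(L, b, rcond=None)
    return Y

# Two well-separated 5D clusters should remain separated in the 2D layout.
X = np.vstack([rng.normal(0, 0.3, (40, 5)), rng.normal(3, 0.3, (40, 5))])
Y = lsp_sketch(X)
gap = np.linalg.norm(Y[:40].mean(axis=0) - Y[40:].mean(axis=0))
print("distance between projected cluster centres:", round(gap, 2))
```

The single sparse solve with no iterative repositioning mirrors the abstract's claim that a final layout of satisfactory precision is obtained without moving points after the solution.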