44 results for Convex Polygon
Abstract:
In 1944/1945 a watermill was excavated at Cham-Hagendorn which, thanks to its exceptionally good wood preservation, has long held a prominent place in research. In 2003 and 2004 the Cantonal Archaeology of Zug was able to re-examine the site archaeologically. In the process, not only further remains of the watermill but also traces of earlier and later installations were recovered: an earlier and a later smithy (Horizon 1a/Horizon 3) as well as a two-phase sanctuary (Horizons 1a/1b). All these installations can now be fitted into the stratigraphic framework recognised in the new excavations (see Suppl. 2). Thanks to the wood preservation, most phases can be dated dendrochronologically (see Fig. 4.1/1a): Horizon 1a with felling dates between AD 162(?)/173 and 200, Horizon 1b around AD 215/218, and Horizon 2 around AD 231. Furthermore, samples for micromorphological and archaeobotanical analyses could be taken during the new excavations (Chap. 2.2; 3.11). The present publication presents the features and building structures (Chap. 2) as well as all stratified finds and a comprehensive selection of the finds recovered in 1944/1945 (Chap. 3). Thanks to joining fragments, so-called cross-fitting sherds, some of the latter can retrospectively be tied into the layer sequence. The micromorphological and archaeobotanical analyses (Chap. 2.2; 3.11) show that in Roman times the site lay in the middle of a landscape strongly shaped by woodland and the river Lorze. Neither a settlement nor individual dwellings can have stood in the immediate vicinity. The installations, accordingly used only for craft and cult purposes, stood on a stream that is probably identical with the one that still drains the Groppenmoos today and flows into the Lorze at Cham-Hagendorn (see Fig. 2.4/1). The ancient stream repeatedly flooded; in total five major flooding phases can be identified (Chap. 2.2; 2.4). Probably triggered by the Lorze spilling over into the stream during a high lake level, these floods must have developed enormous force, to which the individual installations fell victim. As the study of the Roman-period settlement landscape around Lake Zug suggests (Chap. 6 with Fig. 6.2/2), the installations at Cham-Hagendorn probably belonged to a villa presumed at Cham-Heiligkreuz, one of five larger estates in this area. There is no evidence of predecessor installations with which the isolated finds of the 1st century AD (Chap. 4.5) could be associated; these were more likely torn away upstream by one of the floods and washed into Cham-Hagendorn. Use of the site (Horizon 1a; see Suppl. 6) began around AD 170 with a smithy (Chap. 2.5.1). The quantity of finds, in particular the smithing slags (Chap. 3.9), shows that tools were only occasionally made and repaired here (Chap. 5.2). This workshop had probably already been abandoned and left to decay when, in AD 200 (Chap. 4.2.4), a sanctuary was erected on an island between the stream and an arm of the Lorze (Chap. 5.3). Evidence for the sacred status of this island is provided first and foremost by at least one specially planted peach tree, attested by pollen, a piece of wood, and over 400 peach stones (Chap. 3.11). The boundary between the sacred site and its profane surroundings, which ran along the stream, was additionally marked with a row of posts (Chap. 2.5.3).
Integrated into this row of posts was a narrow elongated building (Chap. 2.5.2), reminiscent of the porticoes often built against the temenos walls of ancient sanctuaries and probably serving the same purpose, namely the storage of votive offerings and cult equipment (Chap. 5.3). The rich find material recovered from the layers of the first flood (see Fig. 5/5), which had destroyed this sanctuary around AD 205/210, in particular the abundant pottery (Chap. 3.2.4) and the small finds, some of them strikingly valuable (Chap. 3.3.3), was probably for the most part once housed in this elongated building. A stratified object interpreted as a bell clapper suggests that the five large iron bells found stacked in 1944/1945 may also be attributed to the sanctuary (Chap. 3.4). The above-average proportion of calcined animal bones (Chap. 3.10) also fits this context. After the flood, in AD 215 (Chap. 4.2.4), the undercut stream bank was reinforced with a bank revetment (Chap. 2.6.1). With the construction of a further elongated building standing in the stream (Chap. 2.6.2), the sanctuary on the island was restored in a similar form in AD 218 (Horizon 1b; see Suppl. 7). Of the row of posts that again separated the sacred island from its profane surroundings, however, only a few posts survive. Nevertheless, the sacred character of the installation is assured. Besides the still-flourishing peach tree, it is attested by an ensemble of at least 23 terracotta figurines set up in front of the elongated building (see Fig. 3.6/1): eleven Veneres, ten Matres, a youth in a hooded cloak, and a child-like Risus (Chap. 3.6; see also Chap. 2.6.3). The sediments of the second flood, to which this installation fell victim around AD 225/230, again contained numerous pottery vessels (Chap. 3.2.4) and in part valuable small finds, such as a glass bead with gold foil (Chap. 3.8.2) and a silver brooch (Chap. 3.3.3), which were probably originally housed in the elongated building (Chap. 5.3.2 with Fig. 5/7). Further finds of certain or possible sacred character occur among the material recovered in 1944/1945 (see Fig. 5/8), such as a silver finger ring with a Mercury inscription, a silver lunula pendant, a silver casserole (Chap. 3.3.3), a glass bottle with snake-thread decoration (Chap. 3.8.2), and several rock crystals (Chap. 3.8.4). In the area of the terracottas, several coins (Chap. 3.7) also came to light, which may have been deposited there. After the second flood, a watermill was erected on the stream around AD 231 (Horizon 2; Chap. 2.7; Suppl. 8; Fig. 2.7/49). Whether the sanctuary on the island was rebuilt or abandoned must remain open for lack of evidence. For the raised headrace channel of the watermill, several posts of the preceding installations of Horizons 1a and 1b that were still standing were reused. Although, according to the 28 annual flood horizons (Chap. 2.2) and the finds (Chap. 4.3.2; 4.4.4; 4.5), the watermill stood only until around AD 260, a good generation, it had to be renewed at least twice: three waterwheels, three pairs of millstones, and probably three platforms on which the milling mechanism rested are attested. The reason for these rebuildings was probably the soft, unstable subsoil, which led to shifts, so that the interplay of the wheel shaft or star hub with the gear wheel no longer worked and the whole system broke apart.
Analysis of pollen from the occupation surface identified wheat-type cereal as the milled grain (Chap. 3.11.4). The badge of a beneficiarius (Chap. 3.3.2 with Fig. 3.3/23,B71) could indicate that the processed grain was at least in part destined for the Roman military (see also Chap. 6.2.3). A stylus found in Horizon 2 and further stili, as well as a balance for weighing goods of up to 35-40 kg from the 1944/1945 assemblage, could testify that the grain had to be weighed and registered (Chap. 3.4.2). Shortly after AD 260 the watermill fell victim to another flood. For the following Horizon 3 (Suppl. 9), a gravel floor was laid and a small building erected (Chap. 2.8). This probably again housed a smithy, as the numerous plano-convex slag cakes (Chap. 3.9) found around the small building attest. On the basis of the finds (Chap. 4.4.4; 4.5), this workshop can only have existed for a short time, at most until around AD 270, before it fell victim to yet another flood. Of the most recent installation, which probably still dates to the Roman period (Horizon 4; Suppl. 10), only a construction of large stone slabs could be identified (Chap. 2.9.1). What purpose it served must remain open. The low quantity of finds also indicates that use of the site, at least in the Roman period, gradually came to an end (Chap. 4.5). Among the most recent structures are several pits (Chap. 2.9.2) that perhaps served for clay extraction. In the absence of finds, however, their dating remains uncertain; in particular, we do not know whether they still date to the Roman period or are younger. At the latest with the fifth flood, which led to the final silting-up of the site and probably dates as late as the early modern period, the site was abandoned and was only occupied again with the construction of the existing Baumgartner window factory.
Abstract:
In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm where network users determine the coding operations and the packet rates to be requested from the parent nodes, such that the decoding delay is minimized for all clients. A rate allocation problem is solved by every user, which seeks the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which makes it possible to estimate the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator. We show that our system is able to benefit from inter-session network coding for the simultaneous delivery of video sessions in networks with path diversity.
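To make the per-node decomposition concrete, here is a minimal, hedged sketch (not the paper's actual formulation): a single node splits its upload capacity C among its children, where N_i is an assumed estimate of the number of packets child i still needs to decode (in the spirit of the equivalent packet flows) and the decoding delay of child i is approximated by N_i / r_i, which yields a convex subproblem in the rates r_i.

```python
# Toy convex rate-allocation subproblem for one node (illustrative only):
# minimize the average of N_i / r_i subject to sum(r_i) <= C and r_i > 0.
import numpy as np
from scipy.optimize import minimize

def allocate_rates(packets_needed, capacity):
    n = len(packets_needed)
    x0 = np.full(n, capacity / n)                        # start from an even split
    objective = lambda r: np.mean(packets_needed / r)    # average decoding-delay proxy
    constraints = [{"type": "ineq", "fun": lambda r: capacity - r.sum()}]
    bounds = [(1e-6, capacity)] * n                      # keep every rate strictly positive
    res = minimize(objective, x0, bounds=bounds, constraints=constraints, method="SLSQP")
    return res.x

# Children needing 30, 60 and 90 packets share 10 packets/s of upstream capacity;
# the optimum allocates rates roughly proportional to sqrt(N_i).
print(allocate_rates(np.array([30.0, 60.0, 90.0]), capacity=10.0))
```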
Abstract:
Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data presents some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblock sizes through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
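For illustration, a generalized logistic (Richards) curve of the kind mentioned above can model the fraction of decoded symbols as a function of the reception overhead; the parameter names and values below are generic placeholders, not taken from the paper.

```python
# Generalized logistic curve as a decoding-probability model (illustrative values).
import numpy as np

def generalized_logistic(x, A=0.0, K=1.0, B=25.0, M=1.05, nu=1.0):
    """Richards curve: lower asymptote A, upper asymptote K, growth rate B,
    location of the transition M, and shape parameter nu."""
    return A + (K - A) / (1.0 + np.exp(-B * (x - M))) ** (1.0 / nu)

# Fraction of source symbols decoded versus received-to-source symbol ratio.
overhead = np.linspace(0.5, 1.5, 11)
print(np.round(generalized_logistic(overhead), 3))
```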
Abstract:
In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.
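A standard textbook example, not specific to this review, of how random set theory enters partial identification: if the outcome of interest Y is only known to lie in an observed random interval [Y_L, Y_U], then the identified set for the mean E[Y] is the Aumann expectation of that interval,

$$ \Theta_I \;=\; \mathbb{E}\big[\,[Y_L, Y_U]\,\big] \;=\; \big[\,\mathbb{E}[Y_L],\ \mathbb{E}[Y_U]\,\big], $$

assuming Y_L and Y_U are integrable. Characterizing and estimating such set-valued objects is exactly the kind of task the tools reviewed here are designed for.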
Abstract:
Quantitative measures of polygon shape and orientation are important elements of geospatial analysis. These measures are particularly valuable in the case of lakes, where shape and orientation patterns can help identify the geomorphological agents behind lake formation and evolution. However, the lack of built-in tools designed for this kind of analysis in commercial geographic information system (GIS) software packages has meant that many researchers often must rely on tools and workarounds that are not always accurate. Here, an easy-to-use method to measure rectangularity R, ellipticity E, and orientation O is developed. In addition, a new rectangularity vs. ellipticity index, REi, is defined. Following a step-by-step process, it is shown how these measures and the index can be easily calculated using a combination of GIS built-in functions. The identification of shapes and estimation of orientations performed by this method are applied to the case study of the geometric and oriented lakes of the Llanos de Moxos, in the Bolivian Amazon, where shape and orientation have been the two most important elements studied to infer possible formation mechanisms. It is shown that, thanks to these new indexes, shape and orientation patterns are unveiled that would otherwise have been hard to identify.
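As a hedged illustration of how such measures can be computed outside a GIS (the definitions below are common choices and may not match the article's exact formulas): rectangularity as the ratio of polygon area to the area of its minimum rotated bounding rectangle, ellipticity against an ellipse with the same bounding-box semi-axes, and orientation as the azimuth of the box's long side.

```python
# Illustrative shape measures for a lake polygon using shapely (assumed definitions).
import math
from shapely.geometry import Polygon

def shape_measures(polygon: Polygon):
    mrr = polygon.minimum_rotated_rectangle              # minimum rotated bounding rectangle
    x, y = mrr.exterior.coords.xy
    edges = [(x[i + 1] - x[i], y[i + 1] - y[i]) for i in range(4)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    long_dx, long_dy = edges[lengths.index(max(lengths))]
    orientation = math.degrees(math.atan2(long_dy, long_dx)) % 180.0   # azimuth, 0-180 deg
    a, b = max(lengths) / 2.0, min(lengths) / 2.0        # semi-axes of the bounding box
    rectangularity = polygon.area / mrr.area             # 1.0 for a perfect rectangle
    ellipticity = polygon.area / (math.pi * a * b)       # 1.0 for a perfect ellipse
    return rectangularity, ellipticity, orientation

print(shape_measures(Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])))
```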
Abstract:
Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone's hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. An FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape (a generic quadric form is sketched after this abstract). The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated; a minimization yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio, as well as the elastic-to-total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the identified material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, which allow a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties.
Indentations were performed in dry human and ovine bone in axial and transverse directions and their topography was measured by AFM. Statistical shape modeling of the residual imprint made it possible to define a mean shape and to describe the variability with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations with variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of the shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines. It was possible to obtain a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and the uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
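One generic way to write such a convex quadric yield criterion (notation assumed here, not quoted from the thesis) is

$$ f(\boldsymbol{\sigma}) \;=\; \sqrt{\boldsymbol{\sigma} : \mathbb{F} : \boldsymbol{\sigma}} \;+\; \mathbf{F} : \boldsymbol{\sigma} \;-\; 1 \;=\; 0, $$

where $\mathbb{F}$ is a positive semidefinite fourth-order tensor and $\mathbf{F}$ a second-order tensor. Positive semidefiniteness of $\mathbb{F}$ guarantees convexity of the yield surface; $\mathbf{F} = \mathbf{0}$ recovers a centered, Hill-type ellipsoid, while $\mathbf{F} \neq \mathbf{0}$ produces an eccentric surface that distinguishes tensile from compressive strength.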
Abstract:
Aims. Approach observations with the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) experiment onboard Rosetta are used to determine the rotation period, the direction of the spin axis, and the state of rotation of comet 67P's nucleus. Methods. Photometric time series of 67P have been acquired by OSIRIS since the post-wake-up commissioning of the payload in March 2014. Fourier analysis and convex shape inversion methods have been applied to the Rosetta data as well as to the available ground-based observations. Results. Evidence is found that the rotation rate of 67P changed significantly near the time of its 2009 perihelion passage, probably due to sublimation-induced torque. We find that the sidereal rotation periods P1 = 12.76129 ± 0.00005 h and P2 = 12.4043 ± 0.0007 h for the apparitions before and after the 2009 perihelion, respectively, provide the best fit to the observations. No signs of multiple periodicity are found in the light curves down to the noise level, which implies that the comet is presently in a simple rotation state around its axis of largest moment of inertia. We derive a prograde rotation model with spin vector J2000 ecliptic coordinates λ = 65° ± 15°, β = +59° ± 15°, corresponding to equatorial coordinates RA = 22°, Dec = +76°. However, we find that the mirror solution, also prograde, at λ = 275° ± 15°, β = +50° ± 15° (or RA = 274°, Dec = +27°), is also possible at the same confidence level, owing to the intrinsic ambiguity of the photometric problem for observations performed close to the ecliptic plane.
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that solve a convex problem at each iteration: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
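A plausible rendering of the two-term energy described above, with assumed notation (u the latent sharp image, k the blur kernel, f the observed blurry image, λ a regularization weight, ε a lower bound for the logarithm), is

$$ E(u, k) \;=\; \lVert k \ast u - f \rVert_2^2 \;+\; \lambda \sum_{i} \log\!\big(\varepsilon + \lvert (\nabla u)_i \rvert\big), $$

minimized alternately over u and k; the exact parameterization in the paper may differ. The logarithmic term is concave in |∇u|, which is what makes the overall energy non-convex and motivates the convex surrogate problems solved at each iteration.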
Abstract:
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions, and then we integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Different from other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, by considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. We then use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives a better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
Abstract:
Let Y be a stochastic process on [0,1] satisfying $dY(t) = n^{1/2} f(t)\,dt + dW(t)$, where n ≥ 1 is a given scale parameter ('sample size'), W is standard Brownian motion and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
Abstract:
We explore a generalisation of the Lévy fractional Brownian field on the Euclidean space based on replacing the Euclidean norm with another norm. A characterisation result for admissible norms yields a complete description of all self-similar Gaussian random fields with stationary increments. Several integral representations of the introduced random fields are derived. In a similar vein, several non-Euclidean variants of the fractional Poisson field are introduced, and it is shown that they share the covariance structure with the fractional Brownian field and converge to it. The shape parameters of the Poisson and Brownian variants are related by convex geometry transforms, namely the radial p-th mean body and the polar projection transforms.
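For orientation (notation assumed, not quoted from the paper): a centered self-similar Gaussian field X with stationary increments built from a norm ‖·‖ has the familiar fractional-Brownian covariance

$$ \operatorname{Cov}\big(X(s), X(t)\big) \;=\; \tfrac{1}{2}\Big( \lVert s \rVert^{2H} + \lVert t \rVert^{2H} - \lVert s - t \rVert^{2H} \Big), $$

where H is the self-similarity (Hurst) index. For a non-Euclidean norm this is a valid covariance only when the norm is admissible, i.e. when $\lVert \cdot \rVert^{2H}$ is a negative definite function, which is what the characterisation result addresses.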
Abstract:
In the present contribution, we characterise law-determined convex risk measures that have convex level sets at the level of distributions. By relaxing the assumptions in Weber (Math. Finance 16:419–441, 2006), we show that these risk measures can be identified with a class of generalised shortfall risk measures. As a direct consequence, we are able to extend the results in Ziegel (Math. Finance, 2014, http://onlinelibrary.wiley.com/doi/10.1111/mafi.12080/abstract) and Bellini and Bignozzi (Quant. Finance 15:725–733, 2014) on convex elicitable risk measures and confirm that expectiles are the only elicitable coherent risk measures. Further, we provide a simple characterisation of robustness for convex risk measures in terms of a weak notion of mixture continuity.
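For reference, a standard definition (not quoted from the paper): the τ-expectile of an integrable random variable X is the unique minimizer of an asymmetric quadratic loss,

$$ e_\tau(X) \;=\; \operatorname*{arg\,min}_{m \in \mathbb{R}} \; \mathbb{E}\Big[\, \tau \big((X - m)^+\big)^2 + (1 - \tau)\big((m - X)^+\big)^2 \Big], \qquad \tau \in (0, 1), $$

and, up to the usual sign convention, it is a coherent risk measure precisely when τ ≥ 1/2, which is the family singled out by the elicitability results above.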
Abstract:
In this paper we solve a problem raised by Gutiérrez and Montanari about comparison principles for H-convex functions on subdomains of Heisenberg groups. Our approach is based on the notion of the sub-Riemannian horizontal normal mapping and uses degree theory for set-valued maps. The statement of the comparison principle, combined with a Harnack inequality, is applied to prove the Aleksandrov-type maximum principle, describing the correct boundary behavior of continuous H-convex functions vanishing at the boundary of horizontally bounded subdomains of Heisenberg groups. This result answers a question by Garofalo and Tournier. The sharpness of our results is illustrated by examples.
Abstract:
At the mid-latitudes of Utopia Planitia (UP), Mars, a suite of spatially associated landforms exhibits geomorphological traits that, on Earth, would be consistent with periglacial processes and the possible freeze-thaw cycling of water. The suite comprises small-scale polygonally patterned ground, polygon-junction and -margin pits, and scalloped, rimless depressions. Typically, the landforms incise a dark-toned terrain that is thought to be ice-rich. Here, we investigate the dark-toned terrain using high-resolution images from HiRISE as well as near-infrared spectral data from OMEGA and CRISM. The terrain displays erosional characteristics consistent with a sedimentary nature and near-infrared spectra characterised by a blue slope similar to that of weathered basaltic tephra. We also describe volcanic terrain that is dark-toned and periglacially modified in the Kamchatka mountain range of eastern Russia. That terrain is characterised by weathered tephra inter-bedded with snow, ice-wedge polygons and near-surface excess ice. The excess ice forms in the pore space of the tephra as the result of snow-melt infiltration and, subsequently, in-situ freezing. Based on this possible analogue, we construct a three-stage mechanism that explains the possible ice enrichment of a broad expanse of dark-toned terrain at the mid-latitudes of UP: (1) the dark-toned terrain accumulates and forms via the regional deposition of sediments sourced from explosive volcanism; (2) the volcanic sediments are blanketed by atmospherically precipitated (H2O) snow, ice or an admixture of the two, either concurrently with the volcanic events or between discrete events; and (3) under the influence of high obliquity or explosive volcanism, boundary conditions tolerant of thaw evolve and this, in turn, permits the migration, cycling and eventual formation of excess ice in the volcanic sediments. Over time, and through episodic iterations of this scenario, excess ice forms to decametres of depth.