54 results for Convex Arcs
Abstract:
We propose a new method for fully-automatic landmark detection and shape segmentation in X-ray images. Our algorithm works by estimating the displacements from image patches to the (unknown) landmark positions and then integrating them via voting. The fundamental contribution is that we jointly estimate the displacements from all patches to multiple landmarks together, considering not only the training data but also geometric constraints on the test image. The various constraints constitute a convex objective function that can be solved efficiently. Validated on three challenging datasets, our method achieves high accuracy in landmark detection and, combined with a statistical shape model, gives better performance in shape segmentation than state-of-the-art methods.
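As a sketch of how such a joint, convex displacement estimation could look, the toy example below poses landmark localization as a least-squares program that couples per-patch displacement predictions with inter-landmark geometric constraints. It is an illustration under assumed synthetic data, not the paper's exact formulation; all names and numbers are hypothetical.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_patches, n_landmarks = 50, 4

true_L = rng.uniform(20, 100, (n_landmarks, 2))      # synthetic ground truth
X = rng.uniform(0, 128, (n_patches, 2))              # patch centers
# Noisy per-patch displacement predictions to every landmark, standing in
# for the output of the regression step.
D = true_L[None, :, :] - X[:, None, :] + rng.normal(0, 3, (n_patches, n_landmarks, 2))
W = rng.uniform(0.1, 1.0, (n_patches, n_landmarks))  # prediction confidences
# Mean inter-landmark offsets from "training" data, standing in for the
# geometric constraints that couple the landmarks on the test image.
offset = true_L[:, None, :] - true_L[None, :, :]

L = cp.Variable((n_landmarks, 2))                    # unknown landmark positions

# Data term: every patch votes for every landmark via its displacement.
data_term = sum(W[p, l] * cp.sum_squares(X[p] + D[p, l] - L[l])
                for p in range(n_patches) for l in range(n_landmarks))
# Geometric term: inter-landmark offsets should stay near their training mean.
geo_term = sum(cp.sum_squares(L[i] - L[j] - offset[i, j])
               for i in range(n_landmarks) for j in range(n_landmarks) if i != j)

cp.Problem(cp.Minimize(data_term + 0.5 * geo_term)).solve()
print("estimated landmarks:\n", L.value)
print("true landmarks:\n", true_L)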
Abstract:
We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
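The structure of the closed-form solution can be sketched as follows: take the SVD of the noisy data, threshold the singular values, and build low-rank self-expression coefficients from the surviving right singular vectors. The polynomial threshold below is a hypothetical stand-in for the operator derived in the paper, shown only to illustrate the pipeline.

import numpy as np

def polynomial_threshold(s, tau=1.0):
    # Hypothetical operator: leaves large singular values nearly unchanged
    # ("minimal shrinkage") and sets small ones to zero.
    return np.where(s > tau, s - tau**2 / np.maximum(s, 1e-12), 0.0)

def self_expressive_decomposition(X, tau=1.0):
    # Decompose the noisy data X into a clean self-expressive dictionary A
    # plus residual, in closed form from the SVD of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = polynomial_threshold(s, tau)
    A = U @ np.diag(s_thr) @ Vt
    V1 = Vt[s_thr > 0]              # right singular vectors that survive
    C = V1.T @ V1                   # low-rank coefficients with A = A @ C
    return A, C

X = np.random.default_rng(1).normal(size=(30, 100))
A, C = self_expressive_decomposition(X, tau=2.0)
W = np.abs(C) + np.abs(C.T)        # symmetric affinity for spectral clustering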
Abstract:
Objectives: We compare dose parameters that may have an impact on cochlear function among 3 different radiosurgery delivery techniques. Methods: Five patients with unilateral vestibular schwannoma (VS) were selected for this study. Planning was carried out using the BrainLAB® iPlan planning system v. 4.5. For each patient, three different planning techniques were used: dynamic arc (DA) with 5 arcs per plan, hybrid arc (HA) with 5 arcs per plan, and IMRT with 8 fields per plan. For each technique, two plans were generated with different methods: the first method (PTV coverage) aimed to fully cover the PTV with at least 12 Gy (normalization: 12 Gy covering 99% of the PTV), and the second method (cochlea sparing) aimed to spare the cochlea (normalization: 12 Gy covering 50% of the PTV; cochlear V4Gy below 1%). Plan evaluation considered target volume and coverage (conformity and homogeneity) and OAR constraints (mean (Dmean) and maximum (Dmax) dose to the cochlea, and Dmax to the brainstem). The total number of monitor units (MU) was analyzed. Results: The median tumor volume was 0.95 cm³ (range, 0.86-3 cm³). The median PTV was 1.44 cm³ (range, 1-3.5 cm³). The median distance between the tumor and the cochlea's modiolus was 2.7 mm (range, 1.8-6.3 mm). For the PTV coverage method, when we compared the cochlear dose in VS patients planned with DA, HA and IMRT, there were no significant differences in Dmax (p = 0.872) or Dmean (p = 0.860). We found a significant correlation (p < 0.05) between the target volume and the cochlear Dmean for all plans, with Pearson correlation coefficients of 0.90, 0.92 and 0.94 for the DA, HA and IMRT techniques, respectively. For the cochlea sparing method, when we compared the cochlear dose in VS patients planned with DA, HA and IMRT, there were no significant differences in Dmax (p = 0.310) or Dmean (p = 0.275). However, in this group the V4Gy of the ipsilateral cochlea represented less than 1%. With the HA and IMRT techniques, homogeneity and conformity in the PTV, but also the number of MUs, were increased compared to the DA technique. Conclusion: VS tumors that extend distally into the IAC showed equivalent cochlear sparing with the DA approach compared with the HA and IMRT techniques. Disclosure: No significant relationships.
Abstract:
Spiders have one pair of venom glands; only a few families have reduced them completely (Uloboridae, Holarchaeidae) or modified them for another function (Symphytognathidae or Scytodidae, see Suter and Stratton 2013). All other 42,000 known spider species (99%) use their venom by injecting it into prey items, which subsequently become paralysed or are killed. Spider venom is a complex mixture of hundreds of components, many of them interacting with cell membranes or receptors located mainly in the nervous or muscular system (Herzig and King 2013). Spider venom, as it is today, has a 300-million-year-long history of evolution and adaptation and can be considered an optimized tool to subdue prey. In Mesothelae, the oldest spider group with fewer than 100 species, the venom glands lie in the anterior part of the cheliceral basal segment. They are very small and do not support the predation process very effectively. In Mygalomorphae, the venom glands are well developed and fill the basal cheliceral segment more or less completely. Many of these 3,000 species are medium- to large- or very large-sized spiders, and they have created the image of dangerous beasts attacking and killing a variety of animals, including humans. Although this picture is completely wrong, it is persistent and contributes considerably to human arachnophobia. The third group of spiders, Araneomorphae or "modern spiders", comprises 93% of all spider species. The venom glands are enlarged and extend into the prosoma; the openings of the venom ducts are moved from the convex to the concave side of the cheliceral fangs and enlarged as well. These changes free the chelicerae from the necessity of being large, and hence, on average, araneomorph spiders are much smaller than mygalomorphs. Nevertheless, they possess relatively large venom glands, situated mainly in the prosoma, and may also have rather potent venom.
Abstract:
An eye examining instrument comprises a projection device and a concave screen. The eye examining instrument furthermore comprises a convex reflector, wherein an image can be projected by the projection device onto the convex reflector and reflected by the convex reflector onto the concave screen.
Abstract:
OBJECTIVE Marked differences exist between human knee and ankle joints regarding risks and progression of osteoarthritis (OA). Pathomechanisms of degenerative joint disease may therefore differ in these joints, due to differences in tissue structure and function. Focussing on structural issues which are design goals for tissue engineering, we compared cell and matrix morphologies in different anatomical sites of adult human knee and ankle joints. METHODS Osteochondral explants were acquired from knee and ankle joints of deceased persons aged 20 to 40 years and analyzed for cell, matrix and tissue morphology using confocal and electron microscopy and unbiased stereological methods. Variations associated with joint (knee versus ankle) and biomechanical role (convex versus concave articular surfaces) were identified by 2-way analysis of variance and post-hoc analysis. RESULTS Knee cartilage exhibited higher cell densities in the superficial zone than ankle cartilage. In the transitional zone, higher cell densities were observed in association with convex versus concave articular surfaces, without significant differences between knee and ankle cartilage. Highly uniform cell and matrix morphologies were evident throughout the radial zone in the knee and ankle, regardless of tissue biomechanical role. Throughout the knee and ankle cartilage sampled, chondron density was remarkably constant at approximately 4.2×10⁶ chondrons/cm³. CONCLUSION Variation of cartilage cell and matrix morphologies with changing joint and biomechanical environments suggests that tissue structural adaptations are performed primarily by the superficial and transitional zones. Data may aid the development of site-specific cartilage tissue engineering, and help identify conditions where OA is likely to occur.
Abstract:
In 1944/1945, a watermill was excavated at Cham-Hagendorn which, thanks to its exceptionally good wood preservation, has long held a prominent place in research. In 2003 and 2004, the Kantonsarchäologie Zug was able to re-examine the site archaeologically. Not only further remains of the watermill were recovered, but also traces of older and younger installations: an older and a younger smithy (horizon 1a/horizon 3) as well as a two-phase sanctuary (horizons 1a/1b). All these installations can now be tied into the stratigraphic framework recognized in the new excavations (see suppl. 2). Thanks to the wood preservation, most phases can be dated dendrochronologically (see fig. 4.1/1a): horizon 1a with felling dates between 162(?)/173 and 200 AD, horizon 1b around 215/218 AD, and horizon 2 around 231 AD. Furthermore, samples for micromorphological and archaeobotanical analyses could be taken in the new excavations (chap. 2.2; 3.11). The present publication presents the features and building structures (chap. 2), as well as all stratified finds and a comprehensive selection of the finds recovered in 1944/1945 (chap. 3). Thanks to joining fragments, so-called cross-fitting sherds (Passscherben), some of these can be tied retrospectively into the layer sequence. The micromorphological and archaeobotanical analyses (chap. 2.2; 3.11) show that in Roman times the site lay in a landscape strongly shaped by forest and the river Lorze. Neither a settlement nor individual residential buildings can have been located in the immediate vicinity. The installations, accordingly used only for craft and cult purposes, stood on a stream that is presumably identical with the stream that still drains the Groppenmoos today and flows into the Lorze at Cham-Hagendorn (see fig. 2.4/1). The ancient stream repeatedly flooded: a total of five major flooding phases can be identified (chap. 2.2; 2.4). Probably triggered by a high lake level that caused the Lorze to spill over into the stream, these floods must have developed enormous force, to which the individual installations fell victim. As the study of the Roman-period settlement landscape around the Zugersee makes probable (chap. 6 with fig. 6.2/2), the installations of Cham-Hagendorn likely belonged to a villa presumed at Cham-Heiligkreuz, one of five larger estates in this area. There is no evidence of predecessor installations with which the isolated finds of the 1st century AD (chap. 4.5) could be associated; these were more likely torn away upstream by one of the floods and washed into Cham-Hagendorn. The use of the site (horizon 1a; see suppl. 6) began around 170 AD with a smithy (chap. 2.5.1). The volume of finds, in particular the smithing slags (chap. 3.9), shows that tools were only occasionally produced and repaired here (chap. 5.2). This workshop had presumably already been abandoned and left to decay when, in 200 AD (chap. 4.2.4), a sanctuary was erected on an island between the stream and a branch of the Lorze (chap. 5.3). Evidence for the sacred status of this island is, first and foremost, at least one deliberately planted peach tree, attested by pollen, a piece of wood, and over 400 peach stones (chap. 3.11). The boundary between the sacred site and its profane surroundings, running in the stream, was additionally marked by a row of posts (chap. 2.5.3).
Integrated into this row was a narrow elongated building (chap. 2.5.2), reminiscent of the porticoes often built against the temenos walls of ancient sanctuaries and probably serving the same function, namely storing votive offerings and cult equipment (chap. 5.3). The rich find material from the layers of the first flood (see fig. 5/5), which had destroyed this sanctuary around 205/210 AD, in particular the numerous ceramics (chap. 3.2.4) and the in part strikingly valuable small finds (chap. 3.3.3), was probably for the most part once housed in this elongated building. A stratified object interpreted as a bell clapper suggests that the five large iron bells found stacked in 1944/1945 may also belong to the sanctuary (chap. 3.4). The above-average proportion of calcined animal bones (chap. 3.10) also fits this context. After the flood, the undercut stream bank was reinforced in 215 AD (chap. 4.2.4) with a bank revetment (chap. 2.6.1). With the construction of another elongated building standing in the stream (chap. 2.6.2), the sanctuary on the island was restored in similar form in 218 AD (horizon 1b; see suppl. 7). Of the row of posts that again separated the sacred island from its profane surroundings, only a few posts survived; nevertheless, the sacred character of the installation is certain. Besides the still-flourishing peach tree, there is an ensemble of at least 23 terracotta figurines set up in front of the elongated building (see fig. 3.6/1): eleven Veneres, ten Matres, a youth in a hooded cloak, and a childlike Risus (chap. 3.6; see also chap. 2.6.3). The sediments of the second flood, to which this installation fell victim around 225/230 AD, again yielded numerous ceramic vessels (chap. 3.2.4) and in part valuable small finds, such as a glass bead with gold foil (chap. 3.8.2) and a silver fibula (chap. 3.3.3), which were probably originally housed in the elongated building (chap. 5.3.2 with fig. 5/7). Further finds of certain or possible sacred character occur among the finds recovered in 1944/1945 (see fig. 5/8), such as a silver finger ring with an inscription to Mercury, a silver lunula pendant, a silver casserole (chap. 3.3.3), a glass bottle with snake-thread decoration (chap. 3.8.2), and some rock crystals (chap. 3.8.4). In the area of the terracottas, several coins (chap. 3.7) also came to light, which had perhaps been deposited there. After the second flood, a watermill was built on the stream around 231 AD (horizon 2; chap. 2.7; suppl. 8; fig. 2.7/49). Whether the sanctuary on the island was rebuilt or abandoned must remain open for lack of evidence. Several still-standing posts of the preceding installations of horizons 1a and 1b were reused for the raised headrace channel of the watermill. Although, according to the 28 annual flood horizons (chap. 2.2) and the finds (chap. 4.3.2; 4.4.4; 4.5), the watermill existed only until around 260 AD, a good generation, it had to be renewed at least twice: three waterwheels, three pairs of millstones, and presumably three platforms on which the mill mechanism rested are attested. The reason for these rebuildings was probably the soft, unstable subsoil, which led to shifts, so that the interplay of wheel shaft (or star hub) and transmission gear no longer worked and the whole system broke apart.
The analysis of pollen from the occupation surface identified wheat-type cereal as the milled grain (chap. 3.11.4). The badge of a beneficiarius (chap. 3.3.2 with fig. 3.3/23,B71) could indicate that the processed grain was, at least in part, destined for the Roman military (see also chap. 6.2.3). A stylus found in horizon 2 and further stili, as well as a balance for weighing goods of up to 35-40 kg from the 1944/1945 assemblage, could attest that the grain had to be weighed and registered (chap. 3.4.2). Shortly after 260 AD, the watermill fell victim to another flood. For the following horizon 3 (suppl. 9), a gravel floor was laid and a small building erected (chap. 2.8). This again probably housed a smithy, as the numerous plano-convex slag cakes (Kalottenschlacken) found around the small building attest (chap. 3.9). According to the finds (chap. 4.4.4; 4.5), this workshop can only have existed for a short time, at most until around 270 AD, before it too fell victim to a flood. Of the youngest installation, which probably still dates to Roman times (horizon 4; suppl. 10), only a construction of large stone slabs could be identified (chap. 2.9.1). What purpose it served must remain open. The small volume of finds also suggests that the use of the site, at least in Roman times, gradually came to an end (chap. 4.5). Among the youngest structures are several pits (chap. 2.9.2), which perhaps served for clay extraction. For lack of finds, their dating remains uncertain; in particular, we do not know whether they still date to Roman times or are younger. At the latest with the fifth flood, which led to the final silting-up of the site and probably dates to the early modern period, the site was abandoned and was occupied again only with the construction of the existing Baumgartner window factory.
Abstract:
In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm where network users determine the coding operations and the packet rates to be requested from the parent nodes, such that the decoding delay is minimized for all clients. Every user solves a rate allocation problem, seeking the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which permits estimating the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator. We show that our system is able to benefit from inter-session network coding for the simultaneous delivery of video sessions in networks with path diversity.
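A toy version of one user's convex subproblem, under a deliberately simplified delay model (packets to collect divided by total incoming rate, which is convex for positive rates), might look as follows; the paper's actual model built on equivalent packet flows is more involved, and all numbers below are hypothetical.

import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for one user: number of packets it must collect,
# per-parent link capacities, and a total download budget.
packets_needed = 20.0
capacity = np.array([8.0, 5.0, 4.0])
budget = 10.0

def avg_delay(r):
    # Simplified delay model: packets to collect divided by the total rate
    # requested from the parents; 1/sum(r) is convex for positive rates.
    return packets_needed / np.sum(r)

res = minimize(avg_delay,
               x0=capacity / 2,
               bounds=[(1e-6, c) for c in capacity],
               constraints=[{"type": "ineq", "fun": lambda r: budget - np.sum(r)}])
print("rates requested from parents:", res.x)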
Abstract:
Growth codes are a subclass of rateless codes that have found interesting applications in data dissemination problems. Compared to other rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was proposed for studying the peeling decoder of LDPC and LDGM codes. Compared to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblocks through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
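For illustration, a generalized (Richards) logistic curve of the kind used to model the decoding probability can be written in a few lines; the parameter values below are hypothetical, not fitted values from the paper.

import numpy as np

def generalized_logistic(x, A=0.0, K=1.0, B=12.0, Q=1.0, nu=1.0, M=0.85):
    # Richards' generalized logistic curve: lower asymptote A, upper
    # asymptote K, growth rate B, centered near M (all values hypothetical).
    return A + (K - A) / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)

overhead = np.linspace(0.0, 1.5, 7)   # received symbols / source symbols
for x, p in zip(overhead, generalized_logistic(overhead)):
    print(f"overhead {x:4.2f} -> decoding probability ~ {p:.3f}")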
Abstract:
In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.
Abstract:
Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. An FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated, but a minimization results in the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio, as well as the elastic to total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the found material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, allowing a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties.
Indentations were performed in dry human and ovine bone in axial and transverse directions, and their topography was measured by AFM. Statistical shape modeling of the residual imprint allowed us to define a mean shape and describe the variability with 21 principal components related to imprint depth, surface curvature and roughness. The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations to variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines. It was possible to get a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone and its evolution with age, disease and treatment and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
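As a worked illustration of the quadric formulation, one common way to write a general convex quadratic yield criterion in stress space (a sketch consistent with the abstract, not necessarily the thesis's exact notation) is

\[
  y(\boldsymbol{\sigma}) = \sqrt{\boldsymbol{\sigma}^{\mathsf{T}} \mathbf{F}\, \boldsymbol{\sigma}} + \mathbf{f}^{\mathsf{T}} \boldsymbol{\sigma} - 1 = 0,
  \qquad \mathbf{F} \succeq 0,
\]

where \(\boldsymbol{\sigma}\) collects the stress components in vector (Voigt) notation, \(\mathbf{F}\) is a symmetric matrix and \(\mathbf{f}\) a vector of strength parameters. Positive semidefiniteness of \(\mathbf{F}\) is exactly the convexity requirement, so an identification procedure can minimize over \((\mathbf{F}, \mathbf{f})\) without anticipating the shape; ellipsoidal choices of \(\mathbf{F}\) recover generalized Hill criteria, and rank-deficient choices degenerate to cylinders such as von Mises-type surfaces.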
Abstract:
Measurement association and initial orbit determination are fundamental tasks when building up a database of space objects. This paper proposes an efficient and robust method to determine the orbit using the available information of two tracklets, i.e., their lines of sight and their derivatives. The approach works with a boundary-value formulation to represent hypothesized orbital states and uses an optimization scheme to find the best-fitting orbits. The method is assessed and compared to an initial-value formulation using a measurement set taken by the Zimmerwald Small Aperture Robotic Telescope of the Astronomical Institute at the University of Bern. False associations of closely spaced objects on similar orbits cannot be completely eliminated due to the short duration of the measurement arcs. However, the presented approach uses the available information optimally, and the overall association performance and robustness are very promising. The boundary-value optimization takes only around 2% of the computational time of optimization approaches using an initial-value formulation. The full potential of the method in terms of run-time is additionally illustrated by comparing it to other published association methods.
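The boundary-value idea can be sketched as follows: treat the two topocentric ranges along the measured lines of sight as the unknowns, connect the implied positions with a Lambert solver over the known time of flight, and score the candidate orbit against the measured line-of-sight derivatives. The helper lambert and all data below are hypothetical placeholders, not the paper's implementation.

import numpy as np
from scipy.optimize import minimize

# Hypothetical tracklet data: observer positions [km], unit lines of sight,
# measured line-of-sight rates [1/s], and the time between tracklets [s].
R1, R2 = np.array([6378.0, 0.0, 0.0]), np.array([0.0, 6378.0, 0.0])
u1 = np.array([0.0, 0.6, 0.8]) / np.linalg.norm([0.0, 0.6, 0.8])
u2 = np.array([0.6, 0.0, 0.8]) / np.linalg.norm([0.6, 0.0, 0.8])
udot1 = np.array([1e-4, -5e-5, 0.0])
udot2 = np.array([-8e-5, 1e-4, 0.0])
dt = 600.0

def lambert(r1, r2, tof):
    # Placeholder for any standard Lambert solver returning the velocities
    # at both endpoints of the orbit connecting r1 and r2 in time tof.
    raise NotImplementedError

def residual(rho):
    rho1, rho2 = rho
    r1 = R1 + rho1 * u1            # hypothesized boundary-value state, epoch 1
    r2 = R2 + rho2 * u2            # hypothesized boundary-value state, epoch 2
    v1, v2 = lambert(r1, r2, dt)
    # Line-of-sight rates implied by the candidate orbit (observer velocity
    # neglected for brevity): rho * du/dt = (I - u u^T) v.
    pred1 = (v1 - (u1 @ v1) * u1) / rho1
    pred2 = (v2 - (u2 @ v2) * u2) / rho2
    return np.sum((pred1 - udot1)**2) + np.sum((pred2 - udot2)**2)

# With a real Lambert solver plugged in, the association score is the
# minimized residual over the two ranges:
# best = minimize(residual, x0=[1000.0, 1000.0], bounds=[(200.0, 5e4)] * 2)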
Abstract:
Aims. Approach observations with the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) experiment onboard Rosetta are used to determine the rotation period, the direction of the spin axis, and the state of rotation of comet 67P's nucleus. Methods. Photometric time series of 67P have been acquired by OSIRIS since the post-wake-up commissioning of the payload in March 2014. Fourier analysis and convex shape inversion methods have been applied to the Rosetta data as well as to the available ground-based observations. Results. Evidence is found that the rotation rate of 67P changed significantly near the time of its 2009 perihelion passage, probably due to sublimation-induced torque. We find that the sidereal rotation periods P1 = 12.76129 ± 0.00005 h and P2 = 12.4043 ± 0.0007 h for the apparitions before and after the 2009 perihelion, respectively, provide the best fit to the observations. No signs of multiple periodicity are found in the light curves down to the noise level, which implies that the comet is presently in a simple rotation state around its axis of largest moment of inertia. We derive a prograde rotation model with spin vector J2000 ecliptic coordinates λ = 65° ± 15°, β = +59° ± 15°, corresponding to equatorial coordinates RA = 22°, Dec = +76°. However, we find that the mirror solution, also prograde, at λ = 275° ± 15°, β = +50° ± 15° (or RA = 274°, Dec = +27°), is possible at the same confidence level, due to the intrinsic ambiguity of the photometric problem for observations performed close to the ecliptic plane.
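A minimal sketch of the kind of period search underlying such an analysis is phase-dispersion minimization over trial periods, shown below on synthetic photometry; the light-curve data are placeholders, not OSIRIS measurements.

import numpy as np

rng = np.random.default_rng(2)
true_period = 12.4043                       # hours, one of the reported values
t = np.sort(rng.uniform(0.0, 300.0, 500))   # observation epochs [h]
# Synthetic double-peaked light curve plus noise (placeholder photometry).
flux = 1.0 + 0.1 * np.sin(4.0 * np.pi * t / true_period) + rng.normal(0, 0.01, t.size)

def phase_dispersion(period, nbins=20):
    # Fold the series at the trial period and measure scatter within
    # phase bins; a clean fold gives low within-bin variance.
    order = np.argsort((t / period) % 1.0)
    bins = np.array_split(flux[order], nbins)
    return sum(b.var() * b.size for b in bins) / flux.size

periods = np.linspace(12.0, 13.0, 2000)
best = periods[np.argmin([phase_dispersion(p) for p in periods])]
print(f"best-fit period ~ {best:.4f} h")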
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, strategies that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
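To illustrate the majorization-minimization route, the sketch below applies it to the non-blind step (blur kernel assumed known): the logarithmic gradient prior is majorized at each iteration by a weighted quadratic, so every step solves a convex least-squares problem, here with conjugate gradients. This is a simplified stand-in for the paper's algorithm; all parameters and data are hypothetical.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def psf2otf(k, shape):
    # Zero-pad the kernel to the image size and center it at the origin.
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    for ax, s in enumerate(k.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return np.fft.fft2(pad)

def conv(F, x):
    # Circular convolution with a precomputed transfer function F.
    return np.real(np.fft.ifft2(F * np.fft.fft2(x)))

def mm_deconv(y, k, lam=2e-3, eps=1e-3, outer=15):
    sh = y.shape
    K = psf2otf(k, sh)
    Dx = psf2otf(np.array([[1.0, -1.0]]), sh)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), sh)
    u = y.astype(float).copy()
    b = conv(np.conj(K), y).ravel()
    for _ in range(outer):
        # MM weights: log(|grad u|^2 + eps) is majorized at the current
        # iterate by a quadratic with weights 1 / (|grad u|^2 + eps).
        w = 1.0 / (conv(Dx, u)**2 + conv(Dy, u)**2 + eps)
        def matvec(v):
            V = v.reshape(sh)
            out = conv(np.conj(K), conv(K, V))
            out += lam * conv(np.conj(Dx), w * conv(Dx, V))
            out += lam * conv(np.conj(Dy), w * conv(Dy, V))
            return out.ravel()
        op = LinearOperator((y.size, y.size), matvec=matvec, dtype=float)
        u, _ = cg(op, b, x0=u.ravel(), maxiter=30)   # convex inner problem
        u = u.reshape(sh)
    return u

y = np.random.default_rng(4).random((64, 64))   # placeholder blurry image
k = np.ones((5, 5)) / 25.0                      # assumed-known blur kernel
u = mm_deconv(y, k)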
Abstract:
In this paper, we propose a new method for fully-automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from some randomly sampled image patches to the (unknown) landmark positions, and then we integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike other methods, where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
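A minimal sketch of the voting scheme: each patch casts a vote at its center plus predicted displacement, and the landmark estimate is taken at the densest vote location. The displacement predictions below are synthetic placeholders for the output of the joint estimation step.

import numpy as np

rng = np.random.default_rng(3)
true_landmark = np.array([64.0, 48.0])                 # synthetic ground truth
centers = rng.uniform(0.0, 128.0, (200, 2))            # sampled patch centers
# Noisy displacement predictions, standing in for the estimation step.
disp = true_landmark - centers + rng.normal(0.0, 2.0, (200, 2))

votes = centers + disp
H, xedges, yedges = np.histogram2d(votes[:, 0], votes[:, 1],
                                   bins=64, range=[[0, 128], [0, 128]])
i, j = np.unravel_index(np.argmax(H), H.shape)
estimate = np.array([(xedges[i] + xedges[i + 1]) / 2,
                     (yedges[j] + yedges[j + 1]) / 2])
print("voted landmark estimate:", estimate)            # near (64, 48)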