948 results for Convex Arcs


Relevance:

10.00%

Publisher:

Abstract:

In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm where network users determine the coding operations and the packet rates to be requested from the parent nodes, such that the decoding delay is minimized for all clients. A rate allocation problem is solved by every user, which seeks the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which permits estimating the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates the bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator. We show that our system is able to benefit from inter-session network coding for simultaneous delivery of video sessions in networks with path diversity.
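The per-user convex subproblem structure can be sketched in miniature. The delay model below (delay proportional to inverse rate, a single capacity constraint, a closed-form Lagrangian solution) is an invented stand-in for illustration, not the paper's actual formulation:

```python
import math

def allocate_rates(weights, capacity):
    """Toy convex subproblem: minimize sum(w_i / r_i) subject to
    sum(r_i) <= capacity, r_i > 0. Lagrangian stationarity gives
    r_i proportional to sqrt(w_i) (illustrative model only)."""
    s = sum(math.sqrt(w) for w in weights)
    return [capacity * math.sqrt(w) / s for w in weights]
```

Any perturbation of the optimal allocation that respects the capacity constraint can only increase the total delay, which is a quick sanity check on the closed form.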


Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data presents some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was proposed for studying the Peeling Decoder of LDPC and LDGM codes. Compared to previous works, the Wormald differential equations are set from the nodes' perspective, which enables a numerical solution to the computation of the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large size codeblocks through simulations and use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
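The generalized logistic model of the decoding probability mentioned above can be sketched as follows; the shape parameters are illustrative placeholders, not fitted values from the paper:

```python
import math

def decoding_probability(n_received, k, q=1.0, nu=1.0, growth=0.15):
    """Generalized logistic (Richards) model of P(all k source symbols
    decoded) after receiving n_received coded symbols. The parameters
    q, nu, growth shape the curve and are illustrative only."""
    x = n_received - k  # symbols received beyond the minimum needed
    return (1.0 + q * math.exp(-growth * x)) ** (-1.0 / nu)
```

The curve rises monotonically with the number of received symbols and saturates at 1, matching the qualitative intermediate-performance behavior described above.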


In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.


Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there has been a lack of reliable postyield data on the lower length scales. In order to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single-element tests. An FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated; a minimization yields the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study was performed to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips. The constitutive model was adapted to the quadric criterion, and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves.
Their influence on indentation modulus, hardness, their ratio, as well as the elastic-to-total work ratio were found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the identified material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, which allow a straightforward interpretation, complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation, and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties. Indentations were performed in dry human and ovine bone in axial and transverse directions, and their topography was measured by AFM. Statistical shape modeling of the residual imprint allowed us to define a mean shape and describe the variability with 21 principal components related to imprint depth, surface curvature, and roughness.
The indentation profile of bone was highly consistent and free of any pile-up. A few of the topological parameters, in particular depth, showed significant correlations to variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale, and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines.
It was possible to get a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease, and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly in the future.
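A small sketch of the convexity requirement behind the quadric yield surface described above: a quadric s^T A s + b^T s = 1 bounds a convex elastic domain when the symmetric part of A is positive semidefinite, which can be checked numerically. The function name and interface here are ours, not the thesis':

```python
import numpy as np

def is_convex_quadric(A, tol=1e-9):
    """Check whether the quadric {s : s^T A s + b^T s = 1} bounds a
    convex region: the symmetric part of A must be positive
    semidefinite (all eigenvalues nonnegative, up to tolerance)."""
    S = 0.5 * (A + A.T)                 # symmetric part of A
    return bool(np.min(np.linalg.eigvalsh(S)) >= -tol)
```

In a material-identification setting this check corresponds to the constraint that keeps the minimization over quadric coefficients within the convex shapes mentioned in the abstract.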


Measurement association and initial orbit determination are fundamental tasks when building up a database of space objects. This paper proposes an efficient and robust method to determine the orbit using the available information of two tracklets, i.e. their line-of-sights and their derivatives. The approach works with a boundary-value formulation to represent hypothesized orbital states and uses an optimization scheme to find the best fitting orbits. The method is assessed and compared to an initial-value formulation using a measurement set taken by the Zimmerwald Small Aperture Robotic Telescope of the Astronomical Institute at the University of Bern. False associations of closely spaced objects on similar orbits cannot be completely eliminated due to the short duration of the measurement arcs. However, the presented approach uses the available information optimally, and the overall association performance and robustness are very promising. The boundary-value optimization takes only around 2% of the computational time of optimization approaches using an initial-value formulation. The full potential of the method in terms of run-time is additionally illustrated by comparing it to other published association methods.
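The boundary-value idea, finding a hypothesized state that makes the propagated trajectory match the second boundary condition, can be illustrated with a one-dimensional toy problem solved by shooting and bisection. The real method optimizes over orbital states constrained by two line-of-sights, which this sketch does not attempt to reproduce:

```python
def shoot(v0, T, g=9.81, dt=1e-3):
    """Integrate x'' = -g from x(0) = 0 with initial velocity v0
    (explicit Euler) and return the position at time T."""
    x, v, t = 0.0, v0, 0.0
    while t < T:
        x += v * dt
        v -= g * dt
        t += dt
    return x

def solve_bvp(target, T, lo=0.0, hi=100.0, iters=60):
    """Bisection on the boundary residual x(T) - target: a toy
    stand-in for optimizing over hypothesized boundary states."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid, T) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The residual is monotone in the unknown initial velocity here, so bisection suffices; the orbital problem needs a proper optimizer, but the structure (guess state, propagate, compare to boundary data) is the same.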


Aims. Approach observations with the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) experiment onboard Rosetta are used to determine the rotation period, the direction of the spin axis, and the state of rotation of comet 67P's nucleus. Methods. Photometric time series of 67P have been acquired by OSIRIS since the post-wake-up commissioning of the payload in March 2014. Fourier analysis and convex shape inversion methods have been applied to the Rosetta data as well as to the available ground-based observations. Results. Evidence is found that the rotation rate of 67P has significantly changed near the time of its 2009 perihelion passage, probably due to sublimation-induced torque. We find that the sidereal rotation periods P1 = 12.76129 ± 0.00005 h and P2 = 12.4043 ± 0.0007 h for the apparitions before and after the 2009 perihelion, respectively, provide the best fit to the observations. No signs of multiple periodicity are found in the light curves down to the noise level, which implies that the comet is presently in a simple rotation state around its axis of largest moment of inertia. We derive a prograde rotation model with spin vector J2000 ecliptic coordinates λ = 65° ± 15°, β = +59° ± 15°, corresponding to equatorial coordinates RA = 22°, Dec = +76°. However, we find that the mirror solution, also prograde, at λ = 275° ± 15°, β = +50° ± 15° (or RA = 274°, Dec = +27°), is also possible at the same confidence level, due to the intrinsic ambiguity of the photometric problem for observations performed close to the ecliptic plane.


In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
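The majorization-minimization strategy for a non-convex logarithmic prior can be sketched in one dimension: the concave log term is majorized by its tangent at the current iterate, leaving a convex weighted-L1 problem solved by soft-thresholding. All constants here are illustrative, and the scalar problem is only an analogue of the image-gradient energy:

```python
def soft_threshold(z, t):
    """Closed-form minimizer of 0.5*(x - z)^2 + t*|x|."""
    return max(z - t, 0.0) if z >= 0 else min(z + t, 0.0)

def mm_log_prior(y, lam=0.5, eps=1e-3, iters=50):
    """Majorization-minimization for the 1-D toy problem
    min_x 0.5*(x - y)^2 + lam*log(eps + |x|): at each iterate the
    concave log is replaced by its tangent, giving a convex
    weighted-L1 step with a soft-thresholding solution."""
    x = y
    for _ in range(iters):
        w = lam / (eps + abs(x))   # tangent slope of log term at x
        x = soft_threshold(y, w)
    return x
```

Small inputs are driven exactly to zero (the sparsity-favoring behavior of the log prior), while large inputs are only slightly shrunk.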


In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from some randomly sampled image patches to the (unknown) landmark positions, and then we integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Different from other methods where each image patch independently predicts its displacement, we jointly estimate the displacements from all patches together in a data-driven way, by considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: complete femur, proximal femur, and pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives a better or comparable performance in shape segmentation compared to state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
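The voting step can be sketched as follows: each patch casts its predicted displacement into an accumulator, and the argmax gives the landmark estimate. This toy version uses independent hard votes and omits the joint convex displacement estimation that is the paper's actual contribution:

```python
import numpy as np

def vote_landmark(patch_centers, displacements, image_shape):
    """Accumulate patch votes for a landmark position: each patch at
    (r, c) predicts the landmark at (r + dr, c + dc); the accumulator
    argmax is returned as the detected position (toy sketch)."""
    acc = np.zeros(image_shape, dtype=int)
    for (r, c), (dr, dc) in zip(patch_centers, displacements):
        rr, cc = r + dr, c + dc
        if 0 <= rr < image_shape[0] and 0 <= cc < image_shape[1]:
            acc[rr, cc] += 1
    return np.unravel_index(np.argmax(acc), image_shape)
```

Consistent votes dominate a single outlier, which is the robustness property the voting scheme relies on.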


The time variable Earth's gravity field contains information about the mass transport within the system Earth, i.e., the relationship between mass variations in the atmosphere, oceans, land hydrology, and ice sheets. For many years, satellite laser ranging (SLR) observations to geodetic satellites have provided valuable information on the low-degree coefficients of the Earth's gravity field. Today, the Gravity Recovery and Climate Experiment (GRACE) mission is the major source of information for the time variable field of a high spatial resolution. We recover the low-degree coefficients of the time variable Earth's gravity field using SLR observations to up to nine geodetic satellites: LAGEOS-1, LAGEOS-2, Starlette, Stella, AJISAI, LARES, Larets, BLITS, and Beacon-C. We estimate monthly gravity field coefficients up to degree and order 10/10 for the time span 2003–2013 and compare the results with the GRACE-derived gravity field coefficients. We show that not only degree-2 gravity field coefficients can be well determined from SLR, but also other coefficients up to degree 10, using the combination of short 1-day arcs for low orbiting satellites and 10-day arcs for LAGEOS-1/2. In this way, LAGEOS-1/2 allow recovering zonal terms, which are associated with long-term satellite orbit perturbations, whereas the tesseral and sectorial terms benefit most from low orbiting satellites, whose orbit modeling deficiencies are minimized due to short 1-day arcs. The amplitudes of the annual signal in the low-degree gravity field coefficients derived from SLR agree with GRACE K-band results at a level of 77%. This implies that SLR has great potential to fill the gap between the current GRACE and the future GRACE Follow-On mission for recovering the seasonal variations and secular trends of the longest wavelengths in the gravity field, which are associated with the large-scale mass transport in the system Earth.
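The kind of annual-signal comparison described above can be sketched as an ordinary least-squares fit of offset, trend, and annual sine/cosine terms to a monthly coefficient series. The data and model below are synthetic, not SLR or GRACE values:

```python
import numpy as np

def annual_signal(t_years, series):
    """Least-squares fit of offset + trend + annual sine/cosine to a
    monthly time series; returns the amplitude of the annual term
    (a sketch of comparing annual signals between two solutions)."""
    A = np.column_stack([
        np.ones_like(t_years), t_years,
        np.cos(2 * np.pi * t_years), np.sin(2 * np.pi * t_years),
    ])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return float(np.hypot(coef[2], coef[3]))
```

Comparing such amplitudes for each low-degree coefficient between two solutions is one simple way to express an agreement level like the 77% quoted above.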


The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009–2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations occur along the Sun-satellite direction for GPS and GLONASS satellites, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers.
Based on all tests, we present a new extended ECOM which substantially reduces the spurious signals in the geocenter coordinate z (by about a factor of 2–6), reduces the orbit misclosures at the day boundaries by about 10 %, slightly improves the consistency of the estimated ERPs with those of the IERS 08 C04 Earth rotation series, and substantially reduces the systematics in the SLR validation of the GNSS orbits.
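The harmonic structure described in the second step, with only even-order terms along the Sun-satellite (D) direction, can be written down directly; the coefficients below are illustrative placeholders, not estimated ECOM parameters:

```python
import math

def ecom_d_acceleration(u, D0, coeffs_even):
    """Acceleration along the Sun-satellite (D) direction with a
    constant term plus even-order harmonics in the argument of
    latitude u w.r.t. the Sun:
        a_D(u) = D0 + sum_k [Dc_{2k} cos(2k u) + Ds_{2k} sin(2k u)].
    coeffs_even is a list of (Dc_{2k}, Ds_{2k}) pairs, k = 1, 2, ..."""
    a = D0
    for k, (dc, ds) in enumerate(coeffs_even, start=1):
        a += dc * math.cos(2 * k * u) + ds * math.sin(2 * k * u)
    return a
```

By construction, the periodic terms average to zero over a revolution, so the mean D-direction acceleration equals the constant term D0.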


BACKGROUND: Can the application of local anesthetics (Neural Therapy, NT) alone durably improve pain symptoms in referred patients with chronic and refractory pain? If the application of local anesthetics leads to an improvement that far exceeds the duration of action of local anesthetics, we will postulate that a vicious circle of pain in the reflex arcs has been disrupted (hypothesis). METHODS: Case series design. We exclusively used procaine or lidocaine. The inclusion criteria were severe pain of chronic duration (more than three months), pain unresponsive to conventional medical measures, and written referral from physicians or doctors of chiropractic explicitly to NT. Patients with improvement of pain who started an additional therapy during the study period for a reason other than pain were excluded in order to avoid a potential bias. Treatment success was measured after a one-year follow-up using the outcome measures of pain and analgesics intake. RESULTS: 280 chronic pain patients were included; the most common reason for referral was back pain. The average number of consultations per patient was 9.2 in the first year (median 8.0). After one year, pain was unchanged in 60 patients, 52 patients reported a slight improvement, 126 were considerably better, and 41 were pain-free. At the same time, 74.1% of the patients who took analgesics before starting NT needed fewer analgesics or none at all. No adverse effects or complications were observed. CONCLUSIONS: The good long-term results of targeted therapeutic local anesthesia (NT) in the most problematic group of chronic pain patients (unresponsive to all evidence-based conventional treatment options) indicate that a vicious circle has been broken. The specific contribution of the intervention to these results cannot be determined.
The low costs of local anesthetics, the small number of consultations needed, the reduced intake of analgesics, and the lack of adverse effects also suggest the practicality and cost-effectiveness of this kind of treatment. Controlled trials to evaluate the true effect of NT are needed.


Let Y be a stochastic process on [0,1] satisfying dY(t) = n^{1/2} f(t) dt + dW(t), where n ≥ 1 is a given scale parameter ("sample size"), W is standard Brownian motion, and f is an unknown function. Utilizing suitable multiscale tests, we construct confidence bands for f with guaranteed given coverage probability, assuming that f is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
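The isotonic shape constraint underlying one of the confidence bands corresponds to projecting onto nondecreasing sequences, which the pool-adjacent-violators algorithm computes. This sketch shows the projection itself, not the multiscale band construction:

```python
def isotonic_regression(y):
    """Pool-adjacent-violators: least-squares projection of the
    sequence y onto the cone of nondecreasing sequences. Adjacent
    blocks that violate monotonicity are pooled into their mean."""
    blocks = [[v, 1] for v in y]          # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:   # violator found: pool
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)             # pooling may create new violators
        else:
            i += 1
    out = []
    for m, n in blocks:
        out.extend([m] * n)
    return out
```

The output is the closest nondecreasing fit in the least-squares sense, the natural point estimate inside an isotonic confidence band.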


We explore a generalisation of the Lévy fractional Brownian field on the Euclidean space based on replacing the Euclidean norm with another norm. A characterisation result for admissible norms yields a complete description of all self-similar Gaussian random fields with stationary increments. Several integral representations of the introduced random fields are derived. In a similar vein, several non-Euclidean variants of the fractional Poisson field are introduced and it is shown that they share the covariance structure with the fractional Brownian field and converge to it. The shape parameters of the Poisson and Brownian variants are related by convex geometry transforms, namely the radial pth mean body and the polar projection transforms.


In the present contribution, we characterise law-determined convex risk measures that have convex level sets at the level of distributions. By relaxing the assumptions in Weber (Math. Finance 16:419–441, 2006), we show that these risk measures can be identified with a class of generalised shortfall risk measures. As a direct consequence, we are able to extend the results in Ziegel (Math. Finance, 2014, http://onlinelibrary.wiley.com/doi/10.1111/mafi.12080/abstract) and Bellini and Bignozzi (Quant. Finance 15:725–733, 2014) on convex elicitable risk measures and confirm that expectiles are the only elicitable coherent risk measures. Further, we provide a simple characterisation of robustness for convex risk measures in terms of a weak notion of mixture continuity.
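Expectiles, highlighted above as the only elicitable coherent risk measures, minimize asymmetrically weighted squared deviations and satisfy a weighted-mean fixed-point equation, which gives a simple sample estimator (our own illustrative implementation):

```python
def expectile(sample, tau=0.5, iters=100):
    """tau-expectile of a sample: minimiser of asymmetrically weighted
    squared deviations, computed by iterating the weighted-mean fixed
    point e = sum(w_i x_i) / sum(w_i), with w_i = tau if x_i > e and
    (1 - tau) otherwise. tau = 0.5 gives the ordinary mean."""
    e = sum(sample) / len(sample)
    for _ in range(iters):
        w = [tau if x > e else 1.0 - tau for x in sample]
        e = sum(wi * xi for wi, xi in zip(w, sample)) / sum(w)
    return e
```

For tau > 0.5 the expectile lies above the mean, reflecting the extra weight on upside deviations that makes it a coherent risk measure in that regime.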


Serpentinites release volatiles and several fluid-mobile trace elements found in arc magmas at sub-arc depths. Constraining element uptake in these rocks and defining the trace element composition of fluids released upon serpentinite dehydration can improve our understanding of mass transfer across subduction zones and to volcanic arcs. The eclogite-facies garnet metaperidotite and chlorite harzburgite bodies embedded in paragneiss of the subduction mélange from Cima di Gagnone derive from serpentinized peridotite protoliths and are unique examples of ultramafic rocks that experienced subduction metasomatism and devolatilization. In these rocks, metamorphic olivine and garnet trap polyphase inclusions representing the fluid released during high-pressure breakdown of antigorite and chlorite. Combining major element mapping and laser-ablation ICP-MS bulk inclusion analysis, we characterize the mineral content of the polyphase inclusions and quantify the fluid composition. Silicates, Cl-bearing phases, sulphides, carbonates, and oxides document post-entrapment mineral growth in the inclusions starting immediately after fluid entrapment. Compositional data reveal the presence of two different fluid types. The first (type A) records a fluid prominently enriched in fluid-mobile elements, with Cl, Cs, Pb, As, and Sb concentrations up to 10^3 times primitive mantle (PM) values and ~10^2 PM for Ti and Ba, while Rb, B, Sr, Li, and U concentrations are of the order of 10^1 PM, and alkalis are ~2 PM. The second fluid (type B) has considerably lower fluid-mobile element enrichments, but its enrichment patterns are comparable to the type A fluid. Our data reveal multistage fluid uptake in these peridotite bodies, including selective element enrichment during seafloor alteration, followed by fluid-rock interaction during subduction metamorphism in the plate-interface mélange.
Here, infiltration of sediment-equilibrated fluid produced significant enrichment of the serpentinites in As, Sb, B, and Pb, an enriched trace element pattern that was then transferred to the fluid released at greater depth upon serpentine dehydration (the type A fluid). The type B fluid hosted by garnet may record the composition of the chlorite breakdown fluid released at even greater depth. The Gagnone case study demonstrates that serpentinized peridotites acquire water and fluid-mobile elements during ocean-floor hydration and through exchange with sediment-equilibrated fluids in the early subduction stages. Subsequent antigorite devolatilization at sub-arc depths delivers aqueous fluids to the mantle wedge that can be prominently enriched in sediment-derived components, potentially triggering arc magmatism without the need for concomitant dehydration/melting of metasediments or altered oceanic crust.