938 results for linear-zigzag structural instability
Abstract:
The Cinque Torri group (Cortina d'Ampezzo, Italy) is an articulated system of unstable carbonate rock monoliths located in a very important tourism area and therefore characterized by significant risk. The instability phenomena involved represent an example of lateral spreading developed over a larger deep-seated gravitational slope deformation (DSGSD) area. After the recent fall of a monolith of more than 10 000 m³, a scientific study was initiated to monitor the most unstable sectors and to characterize past movements as a fundamental tool for predicting future movements and for hazard assessment. To gain greater insight into the ongoing lateral spreading process, a method for the quantitative analysis of the rotational movements associated with the lateral spreading has been developed, applied and validated. The method is based on: i) detailed geometrical characterization of the area by means of laser scanner techniques; ii) recognition of the discontinuity sets and definition of a reference frame for each set; iii) correlation between the obtained reference frames related to a specific sector and a stable external reference frame; and iv) determination of the 3D rotations in terms of Euler angles to describe the present settlement of the Cinque Torri system with respect to the surrounding stable areas. In this way, significant information on the processes involved in the fragmentation and spreading of a former dolomitic plateau into different rock cliffs has been gained. The method is suitable for application to similar case studies.
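As a rough illustration of step iv) above: once an orthonormal reference frame has been fitted to the discontinuity sets of an unstable sector and of a stable external sector, the rotation between the two frames can be decomposed into Euler angles. The sketch below (Python, using scipy) only illustrates that final decomposition; the frame values, the z-x-z angle convention and the variable names are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Columns are the unit vectors of an orthonormal frame fitted to the
# discontinuity sets of a stable reference sector (illustrative values).
frame_stable = np.eye(3)

# Frame fitted to the same discontinuity sets on an unstable monolith,
# here built by perturbing the stable frame with a known small rotation.
tilt = Rotation.from_euler("zxz", [12.0, 4.5, -3.0], degrees=True)
frame_block = tilt.as_matrix() @ frame_stable

# Rotation that maps the stable frame onto the block frame.
R = frame_block @ frame_stable.T

# Express the block's attitude change as Euler angles (z-x-z convention,
# chosen arbitrarily here; any consistent convention would do).
angles = Rotation.from_matrix(R).as_euler("zxz", degrees=True)
print("Euler angles of the block w.r.t. the stable frame [deg]:", angles)
```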
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Abstract:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
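Schematically, the two-sided estimate described above can be pictured as follows, where p_{t,x} denotes the density of the solution at (t,x), σ_t² the variance of the mild solution of the corresponding linear equation with additive noise, m_{t,x} a deterministic centering term, and c_1, c_2, C_1, C_2 positive constants; the notation is purely illustrative and the precise statement (in particular the centering and constants) is in the paper:

$$
c_1\,\sigma_t^{-1}\exp\!\left(-\frac{(y - m_{t,x})^{2}}{c_2\,\sigma_t^{2}}\right)
\;\le\; p_{t,x}(y) \;\le\;
C_1\,\sigma_t^{-1}\exp\!\left(-\frac{(y - m_{t,x})^{2}}{C_2\,\sigma_t^{2}}\right),
\qquad y \in \mathbb{R}.
$$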
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been solved through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
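A minimal numerical sketch of the aggregation-bias idea discussed here (not the paper's formal development): compute multipliers M = (I − A)⁻¹ from a small coefficient matrix, aggregate either before or after the inversion, and compare the two routes. The matrix values, the grouping and the column weights below are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical 4-account SAM coefficient matrix (columns sum to < 1),
# used only to illustrate the aggregation-bias idea.
A = np.array([
    [0.10, 0.05, 0.20, 0.10],
    [0.05, 0.10, 0.10, 0.20],
    [0.30, 0.25, 0.05, 0.05],
    [0.20, 0.30, 0.15, 0.05],
])

# Aggregation matrix S: merges accounts 1-2 and accounts 3-4 into two groups.
S = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
], dtype=float)

# Disaggregated multiplier matrix M = (I - A)^{-1}.
M = np.linalg.inv(np.eye(4) - A)

# "Aggregate first": build an aggregated coefficient matrix (here with equal
# column weights inside each group) and invert the smaller system.
W = S.T / S.sum(axis=1)            # 4x2 weighting matrix, columns sum to 1
A_agg = S @ A @ W
M_agg = np.linalg.inv(np.eye(2) - A_agg)

# "Invert first, then aggregate" the disaggregated multipliers.
M_then_agg = S @ M @ W

# Aggregation bias: difference between the two routes.
print("aggregated-model multipliers:\n", M_agg)
print("aggregated disaggregated multipliers:\n", M_then_agg)
print("aggregation bias:\n", M_agg - M_then_agg)
```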
Abstract:
We examine whether and how the main central banks responded to episodes of financial stress over the last three decades. We employ a new methodology for the estimation of monetary policy rules that allows for time-varying response coefficients and corrects for endogeneity. This flexible framework, applied to the U.S., the U.K., Australia, Canada and Sweden together with a new financial stress dataset developed by the International Monetary Fund, allows us not only to test whether the central banks responded to financial stress, but also to detect the periods and types of stress that were most worrying for monetary authorities and to quantify the intensity of the policy response. Our findings suggest that central banks often change policy
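A stylized rendering of the kind of time-varying policy rule the abstract refers to, augmented with a financial stress term, might look as follows; the specification, the variable names and the stress index FSI_t are illustrative assumptions, not the authors' exact rule:

$$
r_t = \rho_t\, r_{t-1} + (1-\rho_t)\left(\bar r_t + \beta_t\,\pi_t + \gamma_t\, y_t + \delta_t\,\mathit{FSI}_t\right) + \varepsilon_t ,
$$

with r_t the policy rate, π_t inflation, y_t the output gap, FSI_t financial stress, and all response coefficients allowed to vary over time.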
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
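For readers new to the topic, the sketch below brute-forces the definition of the pebbling number on a tiny graph. This is emphatically not the linear-optimization Weight Function Lemma technique of the paper, just a naive check of the quantity being bounded; the graph and the expected value are a standard small example.

```python
from functools import lru_cache
from itertools import combinations_with_replacement

def reachable(dist, target, adj):
    """Can one pebble be moved to `target` from distribution `dist`
    (a tuple of per-vertex pebble counts) using pebbling moves?"""
    @lru_cache(maxsize=None)
    def search(d):
        if d[target] >= 1:
            return True
        for u, pu in enumerate(d):
            if pu >= 2:
                for v in adj[u]:
                    nd = list(d)
                    nd[u] -= 2      # two pebbles leave u ...
                    nd[v] += 1      # ... one arrives at v, one is the toll
                    if search(tuple(nd)):
                        return True
        return False
    return search(dist)

def pebbling_number(n, edges):
    """Smallest t such that every distribution of t pebbles can reach
    every target vertex (brute force; only viable for tiny graphs)."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    t = 1
    while True:
        ok = True
        for placement in combinations_with_replacement(range(n), t):
            dist = [0] * n
            for v in placement:
                dist[v] += 1
            if not all(reachable(tuple(dist), r, adj) for r in range(n)):
                ok = False
                break
        if ok:
            return t
        t += 1

# Path on 4 vertices: the pebbling number should be 2**3 = 8, since a pebble
# crossing the diameter is halved at every step.
print(pebbling_number(4, [(0, 1), (1, 2), (2, 3)]))
```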
Abstract:
The problem of finding a feasible solution to a linear inequality system arises in numerous contexts. In [12], the authors proposed an algorithm, called the extended relaxation method, that solves this feasibility problem, and proved its convergence. In this paper, we consider a class of extended relaxation methods depending on a parameter and prove their convergence. Numerical experiments are provided as well.
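For context, the classical relaxation (projection) scheme that this family of methods builds on can be sketched as follows: at each step, project the current point onto the most violated half-space of Ax ≤ b, scaled by a relaxation parameter λ. This is a generic sketch of the underlying idea, not the extended relaxation method of [12]; the system, tolerance and parameter value are illustrative.

```python
import numpy as np

def relaxation_feasibility(A, b, lam=1.5, tol=1e-9, max_iter=10_000):
    """Seek x with A @ x <= b by repeatedly projecting onto the most
    violated half-space, scaled by the relaxation parameter `lam`
    (0 < lam < 2 in the classical convergence theory)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm_sq = np.sum(A * A, axis=1)
    for _ in range(max_iter):
        violation = A @ x - b
        i = int(np.argmax(violation))
        if violation[i] <= tol:
            return x  # feasible (up to tolerance)
        # Relaxed projection onto the half-space {y : A[i] @ y <= b[i]}.
        x = x - lam * violation[i] / row_norm_sq[i] * A[i]
    raise RuntimeError("no feasible point found within max_iter iterations")

# Tiny illustrative system: x1 + x2 <= 4, -x1 <= -1, -x2 <= -1  (i.e. x >= 1).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, -1.0, -1.0])
x = relaxation_feasibility(A, b)
print("feasible point:", x, "max residual:", np.max(A @ x - b))
```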
Abstract:
Although aneuploidy has many possible causes, it often results from underlying chromosomal instability (CIN) leading to an unstable karyotype with cell-to-cell variation and multiple subclones. To test for the presence of CIN in high hyperdiploid acute lymphoblastic leukemia (HeH ALL) at diagnosis, we investigated 20 patients (10 HeH ALL and 10 non-HeH ALL), using automated four-color interphase fluorescence in situ hybridization (I-FISH) with centromeric probes for chromosomes 4, 6, 10, and 17. In HeH ALL, the proportion of abnormal cells ranged from 36.3% to 92.4%, and a variety of aneuploid populations were identified. Compared with conventional cytogenetics, I-FISH revealed numerous additional clones, some of them very small. To investigate the nature and origin of this clonal heterogeneity, we determined average numerical CIN values for all four chromosomes together and for each chromosome and patient group. The CIN values in HeH ALL were relatively high (range, 22.2-44.7%) compared with those in non-HeH ALL (3.2-6.4%), thus demonstrating the presence of numerical CIN in HeH ALL at diagnosis. We conclude that numerical CIN may be at the origin of the high level of clonal heterogeneity revealed by I-FISH in HeH ALL at presentation, which would corroborate the potential role of CIN in tumor pathogenesis.
Abstract:
We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods, based on a splitting of the vector-valued piecewise linear discontinuous finite element space, that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, the mixed and the traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess the performance of the preconditioners.
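The flavour of an additive subspace-correction preconditioner can be illustrated on a generic SPD system: split the unknowns into small blocks, solve each block exactly, and sum the corrections. The sketch below uses a 1D Laplacian stand-in rather than a DG elasticity stiffness matrix, and contiguous index blocks rather than the paper's splitting of the DG space; all of that is an illustrative simplification, not the proposed preconditioner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Stand-in SPD matrix (1D Laplacian); a DG elasticity stiffness matrix
# would take its place in the setting of the paper.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Additive subspace correction with contiguous blocks of size 10:
# precompute the inverse of each diagonal block.
block = 10
slices = [slice(i, min(i + block, n)) for i in range(0, n, block)]
block_inv = [np.linalg.inv(A[s, s].toarray()) for s in slices]

def apply_prec(r):
    z = np.zeros_like(r)
    for s, Binv in zip(slices, block_inv):
        z[s] = Binv @ r[s]   # exact solve on each subspace, corrections summed
    return z

M = LinearOperator((n, n), matvec=apply_prec)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```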
Abstract:
INTRODUCTION. Both hypocapnia and hypercapnia can be deleterious to brain-injured patients. Strict PaCO2 control is difficult to achieve because of patient instability and the unpredictable effects of changes in ventilator settings. OBJECTIVE. The aim of this study was to evaluate our ability to comply with a protocol of controlled mechanical ventilation (CMV) aiming at a PaCO2 between 35 and 40 mmHg in patients requiring neuro-resuscitation. METHODS. Retrospective analysis of consecutive patients (2005-2011) requiring intracranial pressure (ICP) monitoring for traumatic brain injury (TBI), subarachnoid haemorrhage (SAH), intracranial haemorrhage (ICH) or ischemic stroke (IS). Demographic data, GCS, SAPS II, hospital mortality, PaCO2 and ICP values were recorded. During CMV in the first 48 h after admission, we analyzed the time spent within the PaCO2 target in relation to the presence or absence of intracranial hypertension (ICP > 20 mmHg, by periods of 30 min) (Table 1). We also compared the fraction of time (determined by linear interpolation) spent with normal, low or high PaCO2 in hospital survivors and non-survivors (Wilcoxon, Bonferroni correction, p < 0.05) (Table 2). PaCO2 samples collected during and after apnoea tests were excluded. Results are given as median [IQR]. RESULTS. 436 patients were included (TBI: 51.2 %, SAH: 20.6 %, ICH: 23.2 %, IS: 5.0 %), age: 54 [39-64], SAPS II score: 52 [41-62], GCS: 5 [3-8]. 8744 PaCO2 samples were collected during 15 611 h of CMV. CONCLUSIONS. Despite the high number of PaCO2 samples collected (on average one sample every 107 min), our results show that patients undergoing CMV for neuro-resuscitation spent less than half of the time within the pre-defined PaCO2 range. During documented intracranial hypertension, hypercapnia was observed 17.4 % of the time. Since non-survivors spent more time with hypocapnia, further analysis is required to determine whether hypocapnia was detrimental per se or merely reflected increased severity of the brain insult.
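As an aside, the "fraction of time determined by linear interpolation" can be approximated straightforwardly from timestamped PaCO2 samples, e.g. by resampling the piecewise-linear signal on a fine grid and counting the in-range fraction. The sketch below is only an illustration of that computation with made-up numbers, not the study's analysis code.

```python
import numpy as np

def fraction_in_range(times_h, paco2_mmHg, low=35.0, high=40.0, step_h=0.01):
    """Approximate the fraction of time the linearly interpolated PaCO2
    signal spends inside [low, high] mmHg."""
    t = np.asarray(times_h, dtype=float)
    p = np.asarray(paco2_mmHg, dtype=float)
    grid = np.arange(t[0], t[-1], step_h)
    interp = np.interp(grid, t, p)          # piecewise-linear interpolation
    return float(np.mean((interp >= low) & (interp <= high)))

# Made-up samples: hours since admission and the corresponding PaCO2 values.
times = [0, 2, 5, 9, 14, 20]
paco2 = [33, 38, 41, 37, 36, 44]
print(f"time in 35-40 mmHg: {100 * fraction_in_range(times, paco2):.1f} %")
```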
Abstract:
PURPOSE: To determine the local control and complication rates for children with papillary and/or macular retinoblastoma progressing after chemotherapy and undergoing stereotactic radiotherapy (SRT) with a micromultileaf collimator. METHODS AND MATERIALS: Between 2004 and 2008, 11 children (15 eyes) with macular and/or papillary retinoblastoma were treated with SRT. The mean age was 19 months (range, 2-111). Of the 15 eyes, 7, 6, and 2 were classified as International Classification of Intraocular Retinoblastoma Group B, C, and E, respectively. The delivered dose of SRT was 50.4 Gy in 28 fractions using a dedicated micromultileaf collimator linear accelerator. RESULTS: The median follow-up was 20 months (range, 13-39). Local control was achieved in 13 eyes (87%). The actuarial 1- and 2-year local control rates were both 82%. SRT was well tolerated. Late adverse events were reported in 4 patients. Of the 4 patients, 2 had developed focal microangiopathy 20 months after SRT; 1 had developed a transient recurrence of retinal detachment; and 1 had developed bilateral cataracts. No optic neuropathy was observed. CONCLUSIONS: Linear accelerator-based SRT for papillary and/or macular retinoblastoma in children resulted in excellent tumor control rates with acceptable toxicity. Additional research regarding SRT and its intrinsic organ-at-risk sparing capability is justified in the framework of prospective trials.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of transport characteristics over relatively long distances.
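A toy version of the nonparametric kernel-density step, describing the in situ relationship between collocated log-electrical and log-hydraulic conductivities and then drawing values of the latter conditional on the former, might look like the following; the synthetic data, variable names and grid choices are all assumptions, not the study's workflow.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic collocated logs: log10 electrical conductivity (sigma) and
# log10 hydraulic conductivity (K) with a noisy, nonlinear relationship.
log_sigma = rng.uniform(-3.0, -1.0, size=500)
log_K = (-4.0 + 1.2 * log_sigma + 0.3 * np.sin(4 * log_sigma)
         + 0.2 * rng.standard_normal(500))

# Nonparametric bivariate density of (log sigma, log K).
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def sample_logK_given_sigma(log_sigma0, n_samples=5, grid_size=400):
    """Draw log K values from the KDE conditional on a given log sigma,
    by evaluating the joint density on a 1D grid of log K."""
    grid = np.linspace(log_K.min() - 1, log_K.max() + 1, grid_size)
    pts = np.vstack([np.full(grid_size, log_sigma0), grid])
    weights = kde(pts)
    weights /= weights.sum()
    return rng.choice(grid, size=n_samples, p=weights)

print(sample_logK_given_sigma(-2.0))
```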
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models; they are therefore applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
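The underlying distance-based idea can be sketched roughly as follows (this is not the dbstats implementation): turn pairwise distances into latent Euclidean coordinates via classical multidimensional scaling and regress the response on those coordinates. The sketch uses ordinary least squares rather than a full GLM and omits the weighting and localizing refinements; data and dimensions are illustrative.

```python
import numpy as np

def classical_mds(D, k):
    """Latent coordinates from a distance matrix D via double centering
    of the squared distances and a truncated eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(1)

# Toy data: the only "predictor" information passed on is the distance matrix.
X_hidden = rng.standard_normal((60, 3))
y = X_hidden @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(60)
D = np.linalg.norm(X_hidden[:, None, :] - X_hidden[None, :, :], axis=-1)

# Distance-based linear model: regress y on the leading MDS coordinates.
Z = classical_mds(D, k=3)
Z1 = np.column_stack([np.ones(len(y)), Z])
beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
y_hat = Z1 @ beta
print("in-sample R^2:",
      1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))
```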