898 results for Linear matrix inequalities (LMI) techniques
Abstract:
In a network of competing species, competitive intransitivity occurs when the ranking of competitive abilities does not follow a linear hierarchy (e.g., A > B and B > C, but C > A). A variety of mathematical models suggests that intransitive networks can prevent or slow down competitive exclusion and maintain biodiversity by enhancing species coexistence. However, it has been difficult to assess empirically the relative importance of intransitive competition, because a large number of pairwise competition experiments is needed to construct the competition matrix used to parameterize existing models. Here we introduce a statistical framework for evaluating the contribution of intransitivity to community structure using species abundance matrices that are commonly generated from replicated sampling of species assemblages. We provide metrics and analytical methods for using abundance matrices to estimate species competition and patch transition matrices by means of reverse-engineering and a colonization-competition model. These matrices provide complementary metrics for estimating the degree of intransitivity in the competition network of the sampled communities. Benchmark tests reveal that the proposed methods successfully detect intransitive competition networks, even in the absence of direct measures of pairwise competitive strength. To illustrate the approach, we analyzed patterns of abundance and biomass of five species of necrophagous Diptera and eight species of their hymenopteran parasitoids that co-occur in beech forests in Germany. We found evidence for a strong competitive hierarchy within communities of flies and parasitoids. For parasitoids, however, there was a tendency towards increasing intransitivity in higher weight classes, which represented larger resource patches. These tests provide novel methods for empirically estimating the degree of intransitivity in competitive networks from observational datasets. They can be applied to experimental measures of pairwise species interactions, as well as to spatio-temporal samples of assemblages in homogeneous environments or along environmental gradients.
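As an illustration of what an intransitivity metric computes, the sketch below counts the fraction of cyclic triads in a pairwise dominance (tournament) matrix. This is a minimal, generic metric, not the reverse-engineering procedure proposed in the paper; the function name and the matrix encoding are assumptions.

```python
import itertools
import numpy as np

def cyclic_triad_fraction(C):
    """Fraction of species triads that form a competitive cycle.

    C is an n x n dominance matrix for a complete tournament:
    C[i, j] = 1 if species i outcompetes species j, else 0.
    A triad (i, j, k) is intransitive when dominance cycles,
    e.g. i beats j, j beats k, and k beats i.
    """
    n = C.shape[0]
    triads = list(itertools.combinations(range(n), 3))
    cyclic = 0
    for i, j, k in triads:
        # A 3-cycle exists iff each species beats exactly one of the others
        wins = [C[i, j] + C[i, k], C[j, i] + C[j, k], C[k, i] + C[k, j]]
        if wins == [1, 1, 1]:
            cyclic += 1
    return cyclic / len(triads)

# Rock-paper-scissors: fully intransitive, so the fraction is 1.0
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
print(cyclic_triad_fraction(C))
```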
Abstract:
The chemical and isotopic characterization of porewater residing in the inter- and intragranular pore space of the low-permeability rock matrix is an important component of the site characterization and safety assessment of potential host rocks for radioactive waste disposal. The chemical and isotopic composition of porewater in such low-permeability rocks has to be derived by indirect extraction techniques applied to naturally saturated rock material. In most such indirect extraction techniques – especially for rocks with a porosity below about 2 vol.% – the original porewater concentrations are diluted and need to be back-calculated to in-situ concentrations. This requires a well-defined value for the connected porosity accessible to different solutes under in-situ conditions. The derivation of such porosity values, as well as of solute concentrations, is subject to various perturbations during drilling, core sampling, storage and laboratory experiments. The present study aims to demonstrate the feasibility of a variety of these techniques to characterize porewater and solute transport in crystalline rocks. The methods, which have been developed during multiple porewater studies in crystalline environments, were applied to four core samples from the deep borehole DH-GAP04, drilled in the Kangerlussuaq area, Southwest Greenland, as part of the joint NWMO–Posiva–SKB Greenland Analogue Project (GAP). Potential artefacts that can influence the estimation of in-situ porewater chemistry and isotopes, as well as their controls, are described in detail in this report, using specific examples from borehole DH-GAP04.
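For context, a common form of the back-calculation referred to above, written here with assumed symbols: if a rock sample of mass m_rock is equilibrated with test water of volume V_aq in an out-leaching experiment, the in-situ concentration c_pw of a conservative solute is recovered from the measured leachate concentration c_aq as

\[
c_{\mathrm{pw}} \approx c_{\mathrm{aq}}\,\frac{V_{\mathrm{aq}}}{V_{\mathrm{pw}}},
\qquad
V_{\mathrm{pw}} = \frac{m_{\mathrm{rock}}\,\mathrm{WC}}{\rho_w},
\]

where the gravimetric water content WC is derived from the connected porosity. Any error in the porosity value propagates directly into the back-calculated concentration, which is why a well-defined, solute-accessible porosity is emphasized above.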
Abstract:
BACKGROUND Since the pioneering work of Jacobson and Suarez, microsurgery has steadily progressed and is now used in all surgical specialties, particularly in plastic surgery. Before performing clinical procedures it is necessary to learn the basic techniques in the laboratory. OBJECTIVE To assess an animal model that circumvents the following issues: ethical rules, cost, anesthesia and training time. METHODS Between July 2012 and September 2012, 182 earthworms were used in 150 microsurgical training sessions to simulate discrepancy microanastomoses. Training was undertaken over 10 weekly periods. Each training session included 15 simulated microanastomoses performed using the Harashina technique (earthworm diameters >1.5 mm [n=5], between 1.0 mm and 1.5 mm [n=5], and <1.0 mm [n=5]). The technique is presented and documented. A linear model with the week number as a numeric covariate and the size of the animal as a factor was used to determine the trend in anastomosis time over subsequent weeks, as well as differences between the size groups. RESULTS The linear model showed a significant trend (P<0.001) in anastomosis time over the course of the training, as well as significant differences (P<0.001) between the groups of animals of different sizes. For diameters >1.5 mm, mean anastomosis time decreased from 19.6±1.9 min to 12.6±0.7 min between the first and last week of training. For training involving smaller diameters, the results showed a reduction in execution time of 36.1% (P<0.01) (diameter between 1.0 mm and 1.5 mm) and 40.6% (P<0.01) (diameter <1.0 mm) between the first and last weeks. The study demonstrates an improvement in dexterity and in the speed of knot execution. CONCLUSION The earthworm appears to be a reliable experimental model for microsurgical training of discrepancy microanastomoses. Its numerous advantages, as discussed in the present report, suggest that this training model will grow and develop considerably in the near future.
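A minimal sketch of the linear model described in METHODS, assuming a tidy data file and column names (both hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# df holds one row per simulated anastomosis, with assumed columns:
#   time_min   - anastomosis time in minutes
#   week       - training week, 1..10 (numeric covariate)
#   size_group - earthworm diameter class: '>1.5', '1.0-1.5', '<1.0'
df = pd.read_csv("anastomosis_times.csv")  # hypothetical file

# Linear model: trend over weeks plus a factor for animal size
model = smf.ols("time_min ~ week + C(size_group)", data=df).fit()
print(model.summary())  # P-values for the weekly trend and group contrasts
```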
Abstract:
An efficient and reliable automated model that can map physical Soil and Water Conservation (SWC) structures on cultivated land was developed using very high spatial resolution imagery from Google Earth together with ArcGIS, ERDAS IMAGINE, the SDC Morphology Toolbox for MATLAB, and statistical techniques. The model was developed using the following procedures: (1) a high-pass spatial filter algorithm was applied to detect linear features, (2) morphological processing was used to remove unwanted linear features, (3) the raster format was vectorized, (4) the vectorized linear features were split per hectare (ha) and each line was classified according to its compass direction, and (5) the sum of all vector lengths per direction class per ha was calculated. Finally, the direction class with the greatest length was selected from each ha to predict the physical SWC structures. The model was calibrated and validated on the Ethiopian Highlands, where it correctly mapped 80% of the existing structures. The developed model was then tested at sites with different topography. The results show that the developed model is feasible for automated mapping of physical SWC structures and is therefore useful for predicting and mapping such structures across diverse areas.
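As a rough illustration of steps (4) and (5), the sketch below classifies already-vectorized line segments within one hectare by compass direction and picks the direction class with the greatest total length. The eight-class binning and all names are assumptions, and the raster-processing steps (1)-(3) are omitted:

```python
import numpy as np

def dominant_direction(segments):
    """Simplified steps (4)-(5): bin each line segment in one hectare
    by compass direction and return the class with the greatest total
    length, taken here as the predicted SWC structure orientation.

    segments: array of shape (n, 4) with rows (x1, y1, x2, y2).
    An undirected line has an azimuth in [0, 180); it is binned into
    eight 22.5-degree-wide classes (N-S, NNE-SSW, NE-SW, ...).
    """
    dx = segments[:, 2] - segments[:, 0]
    dy = segments[:, 3] - segments[:, 1]
    length = np.hypot(dx, dy)
    az = np.degrees(np.arctan2(dx, dy)) % 180.0    # azimuth from north
    bins = np.round(az / 22.5).astype(int) % 8     # 8 direction classes
    totals = np.bincount(bins, weights=length, minlength=8)
    return int(np.argmax(totals)), totals

segs = np.array([[0, 0, 10, 1], [0, 5, 12, 6], [3, 0, 3, 4.0]])
cls, totals = dominant_direction(segs)
print(cls, totals)  # index of the dominant direction class, summed lengths
```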
Abstract:
BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4-week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than for the other modalities. LIMITATIONS Dry human skulls without soft tissues were used; the results must therefore be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of the measurements based on CBCT and the 1.5 m SMD cephalogram was better than that of the 3 m SMD cephalogram. These findings demonstrate the accuracy and reliability of linear measurements based on CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
Abstract:
In applied work economists often seek to relate a given response variable y to some causal parameter μ* associated with it. This parameter usually represents a summarization, based on some explanatory variables, of the distribution of y, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of μ* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when μ* is modelled as a parametric function of explanatory variables, because it is well known how instrumental variables techniques can be used to identify and estimate μ*. In contrast, handling endogenous regressors in nonparametric models, where μ* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function μ* belongs to a linear space. We also investigate underidentification of μ* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
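For concreteness, the conditional moment restriction at the heart of such models is usually written as follows, with w an instrument and the notation assumed here:

\[
\mathbb{E}\bigl[\, y - \mu^*(x) \mid w \,\bigr] = 0 .
\]

Identification of μ* within a linear space H then amounts to μ* being the only element of H satisfying the restriction; equivalently, underidentification occurs when some nonzero δ ∈ H satisfies E[δ(x) | w] = 0.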
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlation and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Furthermore, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
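A minimal sketch of the modeling idea (gamma marginals, a Gaussian copula for the dependence, and a time-varying correlation), assuming synthetic data and an EWMA recursion as one simple dynamic specification; the paper's exact model is not reproduced here:

```python
import numpy as np
from scipy import stats

# Two stand-in credit-risk covariates with gamma-like marginals
rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.5, 500)
y = 0.5 * x + rng.gamma(2.0, 1.0, 500)

# 1) Gamma marginals: fit, then map each series to uniforms via its CDF
ax, locx, sx = stats.gamma.fit(x, floc=0)
ay, locy, sy = stats.gamma.fit(y, floc=0)
u = stats.gamma.cdf(x, ax, locx, sx)
v = stats.gamma.cdf(y, ay, locy, sy)

# 2) Gaussian copula: the dependence left in the normal scores (zx, zy)
#    is what the copula captures, separated from the gamma marginals
zx, zy = stats.norm.ppf(u), stats.norm.ppf(v)

# 3) Time-varying correlation via an EWMA (one simple dynamic choice)
lam, rho = 0.94, np.zeros(len(zx))
cov = var_x = var_y = 1.0
for t in range(len(zx)):
    cov = lam * cov + (1 - lam) * zx[t] * zy[t]
    var_x = lam * var_x + (1 - lam) * zx[t] ** 2
    var_y = lam * var_y + (1 - lam) * zy[t] ** 2
    rho[t] = cov / np.sqrt(var_x * var_y)
print(rho[-5:])  # rolling copula correlation; contrast np.corrcoef(zx, zy)
```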
Abstract:
Lung damage is a common side effect of chemotherapeutic drugs such as bleomycin. This study used a bleomycin mouse model that simulates the lung damage observed in humans. Noninvasive, in vivo cone-beam computed tomography (CBCT) was used to visualize and quantify fibrotic and inflammatory damage over the entire lung volume of mice. Bleomycin was used to induce pulmonary damage in vivo, and the results from two CBCT systems, a micro-CT and a flat-panel CT (fpCT), were compared with histologic measurements, the standard method of quantifying murine lung damage. Twenty C57BL/6 mice were given either 3 U/kg of bleomycin or saline intratracheally. The mice were scanned at baseline, before the administration of bleomycin, and then 10, 14, and 21 days afterward. At each time point, a subset of mice was sacrificed for histologic analysis. The resulting CT images were used to assess lung volume. Percent lung damage (PLD) was calculated for each mouse on both the fpCT (PLDfpCT) and the micro-CT (PLDμCT). Histologic PLD (PLDH) was calculated for each histologic section at each time point (day 10, n = 4; day 14, n = 4; day 21, n = 5; control group, n = 5). A linear regression was applied to the PLDfpCT vs. PLDH, PLDμCT vs. PLDH and PLDfpCT vs. PLDμCT distributions. This study did not demonstrate strong correlations between PLDCT and PLDH. The coefficient of determination, R², was 0.68 for PLDμCT vs. PLDH and 0.75 for PLDfpCT vs. PLDH. The experimental issues identified in this study were: (1) inconsistent inflation of the lungs from scan to scan, (2) variable distribution of damage (one histologic section is not representative of overall lung damage), (3) control mice not scanned with each group of bleomycin mice, (4) the use of two CT systems, which caused long anesthesia times for the mice, and (5) respiratory gating that did not hold the lung volume constant throughout the scan. Addressing these issues might allow further improvement of the correlation between PLDCT and PLDH.
Abstract:
Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations. Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by this model's increased degrees of freedom. Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification. Simulation of power for misspecified models confirmed the results based on correlation methods and also illustrated the effect of model degrees of freedom on power.
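To make the simulation logic concrete, the sketch below estimates power for one misspecified case: fitting a straight line when the true dose-response is curved. The data-generating model, effect sizes and sample sizes are all assumptions for illustration, not the paper's NHANES-based settings:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def power_linear_fit(n, sims=2000, alpha=0.05):
    """Rejection rate for the exposure effect when the true
    dose-response is curved (sqrt-shaped here) but the fitted
    model is linear, i.e. misspecified."""
    hits = 0
    for _ in range(sims):
        x = rng.uniform(0, 10, n)
        y = 2.0 * np.sqrt(x) + rng.normal(0, 2.0, n)  # true curved model
        X = sm.add_constant(x)                         # fitted: linear
        p = sm.OLS(y, X).fit().pvalues[1]
        if p < alpha:
            hits += 1
    return hits / sims

for n in (20, 50, 200):
    print(n, power_linear_fit(n))  # power shrinks most at small n
```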
Abstract:
With continuous improvements in brachytherapy source designs and techniques, 3D dosimetry methods for treatment dose verification would better ensure accurate patient radiotherapy treatment. This study first aimed to evaluate the 3D dose distributions of the low-dose-rate (LDR) Amersham 6711 Oncoseed™ using PRESAGE® dosimeters, to establish PRESAGE® as a suitable brachytherapy dosimeter. The new AgX100 ¹²⁵I seed model (Theragenics Corporation) was then characterized using PRESAGE® following the TG-43 protocol. PRESAGE® dosimeters are solid, polyurethane-based, 3D dosimeters doped with radiochromic leuco dyes that produce a linear optical-density response to radiation dose. For this project, the radiochromic response in PRESAGE® was captured using optical-CT scanning (632 nm) and the final 3D dose matrix was reconstructed using MATLAB software. An Amersham 6711 seed with an air-kerma strength of approximately 9 U was used to irradiate two dosimeters to 2 Gy and 11 Gy at 1 cm, to evaluate dose rates in the r = 1 cm to r = 5 cm region. The dosimetry parameters were compared with the values published in the updated AAPM Report No. 51 (TG-43U1). An AgX100 seed with an air-kerma strength of about 6 U was used to irradiate two dosimeters to 3.6 Gy and 12.5 Gy at 1 cm. The dosimetry parameters for the AgX100 were compared with the values measured in previous Monte Carlo and experimental studies. In general, the measured dose-rate constant, anisotropy function, and radial dose function for the Amersham 6711 agreed with consensus values to better than 5% in the r = 1 to r = 3 cm region. The dose rates and radial dose functions measured for the AgX100 agreed with the MCNPX and TLD-measured values within 3% in the r = 1 to r = 3 cm region. The measured anisotropy function in PRESAGE® showed relative differences of up to 9% from the MCNPX-calculated values. It was determined that the post-irradiation optical-density change over several days was non-linear across dose regions, and therefore the dose values in the r = 4 to r = 5 cm region carried higher uncertainty. This study demonstrated that, within a radial distance of 3 cm, brachytherapy dosimetry in PRESAGE® can be accurate to within 5% as long as irradiation times are within 48 hours.
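For reference, the parameters named above enter the TG-43U1 2D dose-rate formalism as

\[
\dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),
\]

where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r, θ) the 2D anisotropy function, and (r_0, θ_0) = (1 cm, π/2) the reference point.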
Abstract:
The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimal. Moreover, the optimality of the obtained solution is established.
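In symbols (notation assumed here), oversampling yields an s × r polynomial matrix G(z) with s > r, and compactly supported reconstruction functions exist precisely when G(z) admits a polynomial left inverse U(z):

\[
\mathsf{U}(z)\,\mathsf{G}(z) = I_r,
\qquad
\mathsf{G}(z) = A + zB \ \text{ for a suitable sampling period},
\]

so that the left-inversion problem becomes one for the matrix pencil A + zB.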
Abstract:
This work describes the development of an antenna for satellite communications onboard systems based on recommendations ITU-R S.580-6 [1] and ITU-R S.465-5 [2]. The antenna consists of printed elements grouped in an array, working in a frequency band from 7.25 GHz up to 8.4 GHz (15% bandwidth). This working band covers transmission and reception simultaneously. The antenna reaches a gain of about 31 dBi, has a radiation pattern with a beamwidth smaller than 10° and dual circular polarization, and has the capability to steer in elevation up to 45° through a Butler matrix.
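For context, beam steering of this kind follows the standard uniform linear array relation (symbols assumed here):

\[
AF(\theta) = \sum_{n=0}^{N-1} a_n\, e^{\,jn\left(kd\sin\theta + \beta\right)},
\qquad
\theta_{\max} = -\arcsin\!\left(\frac{\beta}{kd}\right),
\]

where d is the element spacing and k = 2π/λ; each Butler-matrix port excites the array with a different progressive phase shift β, and hence a different pointing angle θ_max.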
Abstract:
We introduce in this paper a method to calculate the Hessenberg matrix of a sum of measures from the Hessenberg matrices of the component measures. Our method extends the spectral techniques used by G. Mantica to calculate the Jacobi matrix associated with a sum of measures from the Jacobi matrices of each of the measures. We apply this method to approximate the Hessenberg matrix associated with a self-similar measure and compare it with the result obtained by an earlier method for self-similar measures that uses a fixed point theorem for moment matrices. Results are given for a series of classical examples of self-similar measures. Finally, we also apply the method introduced in this paper to some examples of sums of (not self-similar) measures, obtaining the exact value of the sections of the Hessenberg matrix.
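As background (notation assumed), the Hessenberg matrix D = (d_jk) of a measure μ represents multiplication by z in the basis of orthonormal polynomials {p_n} of μ:

\[
z\,p_k(z) = \sum_{j=0}^{k+1} d_{jk}\,p_j(z),
\qquad
d_{jk} = \int z\,p_k(z)\,\overline{p_j(z)}\,d\mu(z),
\]

so D is upper Hessenberg; for a measure supported on the real line it reduces to the tridiagonal Jacobi matrix, which explains the connection to Mantica's construction.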
Abstract:
We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral-domain modeling techniques, with the aim of improving accuracy rates. This new information is only weakly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous speech modeling scheme. Tests were performed to evaluate the improvement on our detection task, based on sustained speech alone as well as combined with a continuous speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. These results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to think about tools that could be used to improve short-time analysis.
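A minimal sketch of the score-level fusion implied by the combined scheme, assuming each subsystem outputs a posterior-like score in [0, 1]; the weight and names are illustrative, not the paper's configuration:

```python
import numpy as np

def fuse_scores(s_sustained, s_continuous, w=0.5, threshold=0.5):
    """Late fusion of two OSA classifiers' scores.

    s_sustained:  score from the sustained-speech system (with the
                  added non-linear features), in [0, 1]
    s_continuous: score from the continuous-speech system, in [0, 1]
    w:            fusion weight (assumed; tuned on held-out data)
    """
    fused = w * np.asarray(s_sustained) + (1 - w) * np.asarray(s_continuous)
    return fused >= threshold  # True -> flag as severe OSA

print(fuse_scores([0.8, 0.3], [0.6, 0.2]))  # [ True False]
```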
Abstract:
Instability analysis of compressible orthogonal swept leading-edge boundary-layer flow was performed in the context of BiGlobal linear theory [1, 2]. An algorithm was developed exploiting the sparsity characteristics of the matrix discretizing the PDE-based eigenvalue problem. This allowed use of the MUMPS sparse linear algebra package [3] to obtain a direct solution of the linear systems associated with the Arnoldi iteration. The developed algorithm was then applied to analyze efficiently the effect of compressibility on the stability of the swept leading-edge boundary layer and to obtain neutral curves of this flow as a function of the Mach number in the range 0 ≤ Ma ≤ 1. The present numerical results fully confirmed the asymptotic theory results of Theofilis et al. [4]. Up to the maximum Mach number studied, it was found that an increase of this parameter reduces the critical Reynolds number and the range of unstable spanwise wavenumbers.
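A minimal sketch of the sparse shift-invert Arnoldi approach described above, using SciPy's eigs (whose shift-invert mode factorizes with SuperLU, standing in here for MUMPS) on a stand-in operator; the matrix is a toy placeholder, not the discretized BiGlobal operator:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in sparse nonsymmetric operator playing the role of the
# discretized eigenvalue problem A q = omega q
n = 2000
A = sp.diags([np.full(n - 1, 1.0),
              np.linspace(-1, 1, n),
              np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")

# Shift-invert Arnoldi: eigs factorizes (A - sigma*I) once with a sparse
# direct solver and runs the Arnoldi iteration on the inverted operator,
# recovering the eigenvalues closest to the shift sigma.
sigma = 0.1
vals, vecs = eigs(A, k=6, sigma=sigma, which="LM")
print(np.sort_complex(vals))
```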