616 results for Weighting
Abstract:
Clinical use of Stejskal-Tanner diffusion-weighted images is hampered by the geometric distortions that result from the large residual 3-D eddy current fields induced by the diffusion-weighting gradients. In this work, we aimed to predict, using linear response theory, the residual 3-D eddy current field required for geometric distortion correction, based on phantom eddy current field measurements. The predicted 3-D eddy current field induced by the diffusion-weighting gradients reduced the root mean square error of the residual eddy current field to ~1 Hz. The model's performance was tested on diffusion-weighted images of four normal volunteers. Following distortion correction, the Stejskal-Tanner diffusion-weighted images were of comparable quality to those corrected by image registration (FSL) at low b-values. Unlike registration techniques, the correction was not hindered by low SNR at high b-values, and it improved image quality relative to FSL. Characterization of the 3-D eddy current field with linear response theory enables prediction of the 3-D eddy current field required to correct eddy-current-induced geometric distortions for a wide range of clinical and high b-value protocols.
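A minimal sketch of the linear-response idea: assuming, as is common in eddy current modelling (the abstract does not spell this out), an impulse response given by a sum of decaying exponentials fitted from the phantom measurements, the predicted field is the convolution of the gradient slew rate with that response. All amplitudes, time constants, and names below are illustrative.

import numpy as np

# Linear response model: eddy field b(t) = (-dG/dt) convolved with h(t),
# where h(t) is a sum of decaying exponentials fitted to phantom data.
def eddy_field(gradient, dt, amps=(0.5, 0.1), taus=(5e-3, 50e-3)):
    t = np.arange(gradient.size) * dt
    h = sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))
    slew = -np.gradient(gradient, dt)              # -dG/dt drives the eddy currents
    return np.convolve(slew, h)[:gradient.size] * dt

dt = 1e-5                                          # 10 us raster time
g = np.zeros(20000)
g[1000:11000] = 30e-3                              # one 30 mT/m diffusion gradient lobe
b_eddy = eddy_field(g, dt)                         # predicted residual field over time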
Abstract:
Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms both of international policy and of universalist ethical principles of equity. This paper aims, on the one hand, to review methodological aspects of inequality measurement for certain environmental data and, on the other, to extend the scarce empirical evidence on the international distribution of the Ecological Footprint (EF) by using a longer EF time series. Most of the techniques currently prominent in the literature are reviewed and then tested on EF data, with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices that best fit environmental inequality measurement are CV2 and GE(2), because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting factor decomposition method is proposed in order to isolate weighting factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV2, which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the environmental inequality measurement of the EF: this inequality has been quite stable, and its change over time is due to changes in the per capita vector rather than in population. Almost the entirety of EF inequality is explained by differences in means between the World Bank country groups. This finding suggests that international environmental agreements should be pursued on a regional basis in order to achieve greater consensus between the parties involved. Additionally, the source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies, because of the implications for the satisfaction of basic needs. Keywords: ecological footprint; ecological inequality measurement; inequality decomposition.
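For reference, the two indices singled out above, and the natural decomposition of the squared coefficient of variation by source, take the following standard textbook form (this is not a reproduction of the paper's notation):

CV^2 = \frac{\sigma^2}{\mu^2}, \qquad
GE(2) = \frac{1}{2n} \sum_{i=1}^{n} \left[ \left( \frac{y_i}{\mu} \right)^{2} - 1 \right] = \frac{1}{2}\, CV^2 .

If the footprint per capita is a sum of sources, y_i = \sum_k y_i^k, the natural decomposition follows from \operatorname{var}(y) = \sum_k \operatorname{cov}(y^k, y):

CV^2(y) = \sum_k \frac{\operatorname{cov}(y^k, y)}{\mu^2},

where each summand is read as the contribution of source k, the per-source terms referred to in the abstract.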
Abstract:
Raised blood pressure (BP) is a major risk factor for cardiovascular disease. Previous studies have identified 47 distinct genetic variants robustly associated with BP, but collectively these explain only a few percent of the heritability for BP phenotypes. To find additional BP loci, we used a bespoke gene-centric array to genotype an independent discovery sample of 25,118 individuals that combined hypertensive case-control and general population samples. We followed up four SNPs associated with BP at our p < 8.56 × 10⁻⁷ study-specific significance threshold and six suggestively associated SNPs in a further 59,349 individuals. We identified and replicated a SNP at LSP1/TNNT3, a SNP at MTHFR-NPPB independent (r² = 0.33) of previous reports, and replicated SNPs at AGT and ATP2B1 reported previously. An analysis of combined discovery and follow-up data identified SNPs significantly associated with BP at p < 8.56 × 10⁻⁷ at four further loci (NPR3, HFE, NOS3, and SOX6). The high number of discoveries made with modest genotyping effort can be attributed to using a large-scale yet targeted genotyping array and to the development of a weighting scheme that maximized power when meta-analyzing results from samples ascertained with extreme phenotypes, in combination with results from nonascertained or population samples. Chromatin immunoprecipitation and transcript expression data highlight potential gene regulatory mechanisms at the MTHFR and NOS3 loci. These results provide candidates for further study to help dissect mechanisms affecting BP and highlight the utility of studying SNPs and samples that are independent of those studied previously even when the sample size is smaller than that in previous studies.
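The abstract credits a weighting scheme for meta-analysing ascertained (extreme-phenotype) case-control samples together with population samples. The exact scheme is not given here, but the weighted Stouffer Z-score combination, a standard device in GWAS meta-analysis, illustrates the principle:

Z_{\mathrm{meta}} = \frac{\sum_i w_i Z_i}{\sqrt{\sum_i w_i^2}} ,

where Z_i is the association Z-score of sample i and w_i its weight; choosing w_i to reflect each sample's effective (power-adjusted) size rather than its nominal size is one way to maximize power when ascertained and non-ascertained samples are combined.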
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete, and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system; these conditions are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
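A generic Lyapunov-Krasovskii functional for a neutral system with discrete delay \tau(t), neutral delay d, and distributed delay r has the following shape (illustrative only; the paper's novel functional is not reproduced here):

V(x_t) = x^{\top}(t) P x(t)
       + \int_{t-\tau(t)}^{t} x^{\top}(s) Q x(s)\, ds
       + \int_{t-d}^{t} \dot{x}^{\top}(s) R \dot{x}(s)\, ds
       + \int_{-r}^{0} \int_{t+\theta}^{t} x^{\top}(s) Z x(s)\, ds\, d\theta ,
\qquad P, Q, R, Z \succ 0 .

Requiring \dot{V} < 0 along trajectories, with free-weighting matrices introduced through the Leibniz-Newton identity x(t) - x(t-\tau(t)) - \int_{t-\tau(t)}^{t} \dot{x}(s)\, ds = 0, is what produces LMI conditions of the kind described.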
Abstract:
The H∞ synchronization problem for the master-slave structure of a second-order neutral system with time-varying delays is presented in this paper. Delay-dependent sufficient conditions for the design of a delayed output-feedback control are given by the Lyapunov-Krasovskii method in terms of a linear matrix inequality (LMI). A controller that guarantees H∞ synchronization of the master and slave structure using some free-weighting matrices is then developed. A numerical example is given to show the effectiveness of the method.
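Both this abstract and the previous one reduce the design problem to LMI feasibility, which convex solvers handle directly. As a minimal self-contained illustration (a delay-free Lyapunov stability LMI, not the delayed output-feedback LMI of the paper), using cvxpy:

import numpy as np
import cvxpy as cp

# Certify stability of dx/dt = A x by finding P > 0 with A'P + PA < 0 (an LMI).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                      # illustrative stable system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve()   # pure feasibility problem
print(P.value)                                    # a Lyapunov matrix certifying stability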
Abstract:
Contextual effects on child health have been investigated extensively in previous research. However, few studies have considered the interplay between community characteristics and individual-level variables. This study examines the influence of community education and family socioeconomic characteristics on child health (as measured by height-for-age and weight-for-age Z-scores), as well as their interactions. We adapted the Commission on Social Determinants of Health (CSDH) framework to the context of child health. Using data from the 2010 Colombian Demographic and Health Survey (DHS), weighted multilevel models are fitted, since the data are not self-weighting. The results show a positive impact of the level of education of other women in the community on child health, even after controlling for individual and family socioeconomic characteristics. Different pathways through which community education can substitute for the effect of family characteristics on child nutrition are found. The interaction terms highlight the importance of community education as a moderator of the impact of the mother's own education and autonomy on child health. In addition, the results reveal differences between the height-for-age and weight-for-age indicators in their responsiveness to individual and contextual factors. Our findings suggest that community intervention programmes may have differential effects on child health. Therefore, their identification can contribute to a better targeting of child care policies.
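The kind of two-level specification implied here, with a cross-level interaction between the mother's own education and community education, might be written as follows (an illustrative random-intercept form; variable names are assumptions, and the DHS survey weights enter at both levels):

HAZ_{ij} = \beta_0 + \beta_1 \mathrm{MotherEdu}_{ij} + \beta_2 \mathrm{CommEdu}_{j}
         + \beta_3 (\mathrm{MotherEdu}_{ij} \times \mathrm{CommEdu}_{j})
         + \gamma^{\top} \mathbf{x}_{ij} + u_j + e_{ij},
\qquad u_j \sim N(0, \sigma_u^2), \quad e_{ij} \sim N(0, \sigma_e^2),

where HAZ_{ij} is the height-for-age Z-score of child i in community j and \mathbf{x}_{ij} collects the family socioeconomic controls; \beta_3 captures the moderating role of community education.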
Abstract:
The biological and therapeutic responses to hyperthermia, when it is envisaged as an anti-tumor treatment modality, are complex and variable. Heat delivery plays a critical role and is counteracted by more or less efficient body cooling, which is largely mediated by blood flow. In the case of the magnetically mediated modality, the delivery of the magnetic particles, most often superparamagnetic iron oxide nanoparticles (SPIONs), is also critically involved. We focus here on the magnetic characterization of two injectable formulations able to gel in situ and entrap silica microparticles embedding SPIONs. These formulations have previously shown suitable syringeability and intratumoral distribution in vivo. The first formulation is based on alginate, and the second on a poly(ethylene-co-vinyl alcohol) (EVAL). Here we investigated the magnetic properties and heating capacities in an alternating magnetic field (141 kHz, 12 mT) for implants with increasing concentrations of magnetic microparticles. We found that the magnetic properties of the magnetic microparticles were preserved in the formulation and in the wet implant at 37 °C, as in vivo. Using two orthogonal methods, a common SLP (20 W g⁻¹) was found after weighting by the magnetic microparticle fraction, suggesting that both formulations are able to properly carry the magnetic microparticles in situ while preserving their magnetic properties and heating capacities.
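Specific loss power (SLP) is conventionally obtained calorimetrically from the initial rate of temperature rise; a standard form (not necessarily either of the abstract's two orthogonal methods) is

\mathrm{SLP} = \frac{C_{\mathrm{sample}}\, m_{\mathrm{sample}}}{m_{\mathrm{mag}}} \left. \frac{dT}{dt} \right|_{t \to 0},

where C_{\mathrm{sample}} is the specific heat capacity, m_{\mathrm{sample}} the sample mass, and m_{\mathrm{mag}} the mass of magnetic material; the abstract's weighting by magnetic microparticle fraction corresponds to the normalization by m_{\mathrm{mag}}.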
Abstract:
This paper considers the estimation of the geographical scope of industrial location determinants. While previous studies impose strong assumptions on the weighting scheme of the spatial neighbour matrix, we propose a flexible parametrisation that allows for different (distance-based) definitions of neighbourhood and different weights for the neighbours. In particular, we estimate how far indirect marginal effects can reach and discuss how to report them. We also show that the use of smooth transition functions provides tools for policy analysis that are not available in traditional threshold modelling. Keywords: count data models, industrial location, smooth transition functions, threshold models. JEL-Codes: C25, C52, R11, R30.
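A logistic smooth transition function is the usual way to let spatial weights decay smoothly with distance rather than drop to zero at a hard cut-off; one common parametrisation (illustrative, not necessarily the paper's exact form) is

w_{ij}(d_{ij}; \gamma, c) = \left[ 1 + \exp\{ \gamma (d_{ij} - c) \} \right]^{-1}, \qquad \gamma > 0,

where d_{ij} is the distance between locations i and j, c locates the transition, and \gamma controls its smoothness; as \gamma \to \infty the weight collapses to the threshold indicator 1\{ d_{ij} \le c \}, recovering traditional threshold modelling as a limiting case.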
Abstract:
High Resolution Magic Angle Spinning (HR-MAS) NMR allows metabolic characterization of biopsies. HR-MAS spectra from the tissues of most organs show strong lipid contributions that overlap metabolite regions, hampering metabolite estimation. Metabolite quantification and analysis would benefit from a separation of lipids and small metabolites. Generally, a relaxation filter is used to reduce lipid contributions. However, the strong relaxation filter required to eliminate most of the lipids also reduces the signals of small metabolites. The aim of our study was therefore to investigate different diffusion editing techniques in order to exploit diffusion differences for separating lipid and small metabolite contributions in spectra from different organs for unbiased metabonomic analysis. Thus, 1D and 2D diffusion measurements were performed, and pure lipid spectra obtained at strong diffusion weighting (DW) were subtracted from those obtained at low DW, which contain both small metabolites and lipids. This subtraction yielded almost lipid-free small metabolite spectra from muscle tissue. Further improved separation was obtained by combining a 1D diffusion sequence with a T2-filter, with the subtraction method eliminating residual lipids from the spectra. Similar results obtained for biopsies of different organs suggest that this method is applicable to various tissue types. The elimination of lipids from HR-MAS spectra, and the resulting less biased assessment of small metabolites, has the potential to remove ambiguities in the interpretation of metabonomic results. This is demonstrated in a reproducibility study on biopsies from human muscle.
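The diffusion-edited subtraction amounts to scaling the lipid-only (strong-DW) spectrum and removing it from the low-DW spectrum. A minimal numpy sketch, in which the scale factor is estimated by least squares over a lipid-dominated region (region limits and names are illustrative):

import numpy as np

def diffusion_edit(s_low, s_high, lipid_region):
    # s_low, s_high: 1-D spectra at low / strong diffusion weighting
    # lipid_region: slice covering a lipid-dominated ppm range
    # least-squares scale so the lipid region cancels after subtraction
    num = np.dot(s_high[lipid_region], s_low[lipid_region])
    den = np.dot(s_high[lipid_region], s_high[lipid_region])
    return s_low - (num / den) * s_high    # ~lipid-free small-metabolite spectrum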
Abstract:
This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection to the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation, and imaging conditions. Results from the digital phantom experiments demonstrate that the proposed technique is able to recover subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation and allowed different pulsation regions to be identified under clinical conditions.
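One plausible form for such a per-projection weight, assuming (the abstract does not specify this) a Gaussian kernel on the circular distance between each projection's cardiac phase and the phase being reconstructed:

import numpy as np

def phase_weights(proj_phases, target_phase, sigma=0.05):
    # proj_phases: cardiac phase in [0, 1) of each 3DRA projection
    # target_phase: phase being reconstructed; sigma: kernel width (illustrative)
    d = np.abs(proj_phases - target_phase)
    d = np.minimum(d, 1.0 - d)             # circular (wrap-around) phase distance
    w = np.exp(-0.5 * (d / sigma) ** 2)    # Gaussian relevance kernel
    return w / w.sum()                     # normalized relevance per projection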
Abstract:
To enhance the clinical value of coronary magnetic resonance angiography (MRA), high-relaxivity contrast agents have recently been used at 3T. Here we examine a uniform bilateral shadowing artifact observed along the coronary arteries in MRA images collected using such a contrast agent. Simulations were performed to characterize this artifact, including its origin, to determine how best to mitigate this effect, and to optimize a data acquisition/injection scheme. An intraluminal contrast agent concentration model was used to simulate various acquisition strategies with two profile orders for a slow infusion of a high-relaxivity contrast agent. Filtering effects from temporally variable weighting in k-space are prominent when a centric, radial (CR) profile order is applied during contrast infusion, resulting in decreased signal enhancement and underestimation of vessel width, while both pre- and postinfusion steady-state acquisitions result in overestimation of the vessel width. Acquisition during the brief postinfusion steady-state produces the greatest signal enhancement and minimizes k-space filtering artifacts.
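The k-space filtering mechanism can be reproduced in one dimension: with centre-out (centric) ordering, the k-space centre is sampled early at low signal and the periphery later at higher signal, so the enhancement curve becomes a weighting across spatial frequencies. A sketch under these assumptions (profile, ordering, and enhancement curve are all illustrative):

import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
vessel = (np.abs(x) < 0.08).astype(float)          # idealized vessel profile
kspace = np.fft.fftshift(np.fft.fft(vessel))       # k = 0 at the array centre

order = np.argsort(np.abs(np.arange(n) - n // 2))  # centre-out acquisition order
t = np.empty(n)
t[order] = np.linspace(0.0, 1.0, n)                # acquisition time of each sample

enhancement = 1.0 + 1.5 * t                        # signal rising during infusion
filtered = np.fft.ifft(np.fft.ifftshift(kspace * enhancement)).real

# Comparing 'filtered' with 'vessel' shows the reduced enhancement at the
# profile centre and the apparent change in vessel width described above.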
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
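The machinery shared by every method the abstract lists is the singular-value decomposition. A minimal sketch of biplot coordinates from the SVD, shown here in the familiar form-biplot scaling rather than the authors' proposed standard-biplot scaling:

import numpy as np

def biplot_coords(X, ndim=2):
    # Rows get principal coordinates, columns standard coordinates, so that
    # rows @ cols.T is the rank-ndim approximation of the centred data:
    # the scalar-product interpretation underlying any biplot.
    Xc = X - X.mean(axis=0)                        # column-centre the data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U[:, :ndim] * s[:ndim]                  # cases / sample units
    cols = Vt[:ndim].T                             # variables
    return rows, cols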
Abstract:
In coronary magnetic resonance angiography, a magnetization-preparation scheme for T2-weighting (T2Prep) is widely used to enhance contrast between the coronary blood-pool and the myocardium. This prepulse is commonly applied without spatial selection to minimize flow sensitivity, but the nonselective implementation results in a reduced magnetization of the in-flowing blood and a related penalty in signal-to-noise ratio. It is hypothesized that a spatially selective T2Prep would leave the magnetization of blood outside the T2Prep volume unaffected and thereby lower the signal-to-noise ratio penalty. To test this hypothesis, a spatially selective T2Prep was implemented in which the user could freely adjust the angulation and position of the T2Prep slab to avoid covering the ventricular blood-pool and saturating the in-flowing spins. A time gap of 150 ms was further added between the T2Prep and the other prepulses to allow for in-flow of a larger volume of unsaturated spins. Consistent with numerical simulation, the spatially selective T2Prep increased in vivo human coronary artery signal-to-noise ratio (42.3 ± 2.9 vs. 31.4 ± 2.2, n = 22, P < 0.0001) and contrast-to-noise ratio (18.6 ± 1.5 vs. 13.9 ± 1.2, P = 0.009) compared with the nonselective T2Prep. Additionally, a segmental analysis demonstrated that the spatially selective T2Prep was most beneficial in the proximal and mid segments, where the in-flowing blood volume was largest compared to the distal segments.
Abstract:
OBJECTIVE: To determine the usefulness of computed tomography (CT), magnetic resonance imaging (MRI), and Doppler ultrasonography (US) in providing specific images of gouty tophi. METHODS: Four male patients with chronic gout with tophi affecting the knee joints (three cases) or the olecranon processes of the elbows (one case) were assessed. Crystallographic analyses of the synovial fluid or tissue aspirates of the areas of interest were made with polarising light microscopy, alizarin red staining, and x-ray diffraction. CT was performed with a GE scanner, MR imaging was obtained with a 1.5 T Magnetom (Siemens), and ultrasonography with colour Doppler was carried out by standard technique. RESULTS: Crystallographic analyses showed monosodium urate (MSU) crystals in the specimens of the four patients; hydroxyapatite and calcium pyrophosphate dihydrate (CPPD) crystals were not found. A diffuse soft tissue thickening was seen on plain radiographs, but no calcifications or ossifications of the tophi. CT disclosed lesions containing round and oval opacities, with a mean density of about 160 Hounsfield units (HU). With MRI, lesions were of low to intermediate signal intensity on T1 and T2 weighting. After contrast injection in two cases, enhancement of the tophus was seen in one. Colour Doppler US showed the tophi to be hypoechogenic with a peripheral increase in blood flow in three cases. CONCLUSION: The MR and colour Doppler US images showed the tophi as masses surrounded by a hypervascular area, which cannot be considered specific for gout. On CT images, however, masses of about 160 HU density were clearly seen, corresponding to MSU crystal deposits.