985 results for Conjugate gradient methods
Abstract:
Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and also the change in water storage in the soil profile at 16 moisture measurement points over different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was set up in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated from continuous TDR monitoring, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals and the fact that it does not require prior determination of soil hydraulic properties such as water retention and hydraulic conductivity. The estimates of percolation obtained with the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter for time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
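As a rough illustration of the storage-change approach described in this abstract, the sketch below estimates percolation from two successive TDR water-content profiles; it is a minimal sketch, not the authors' procedure, and the depths, readings, and time step are hypothetical.

```python
import numpy as np

# Hypothetical TDR readings: volumetric water content (cm3/cm3) at each
# measurement depth, taken at two times dt hours apart.
depths_cm = np.array([10, 20, 30, 40, 50, 60, 70, 80])   # layer midpoints
layer_thickness_cm = 10.0
theta_t0 = np.array([0.335, 0.338, 0.340, 0.342, 0.344, 0.345, 0.346, 0.347])
theta_t1 = np.array([0.331, 0.335, 0.338, 0.340, 0.342, 0.344, 0.345, 0.346])
dt_h = 1.0  # time interval (h)

# Water storage in the monitored profile (cm of water) at each time:
# S = sum(theta_i * layer_thickness).
storage_t0 = np.sum(theta_t0) * layer_thickness_cm
storage_t1 = np.sum(theta_t1) * layer_thickness_cm

# With evaporation prevented (plastic cover), the decrease in storage over dt
# is attributed to water percolating below the bottom of the monitored profile.
percolation_cm = storage_t0 - storage_t1
percolation_rate = percolation_cm / dt_h
print(f"percolation over {dt_h} h: {percolation_cm:.3f} cm ({percolation_rate:.3f} cm/h)")
```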
Abstract:
One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which causes false alarms. We present a method that eliminates sidelobes of up to a given height if certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and optical implementation.
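For context, the sketch below shows a generic way to compute a correlation plane via the FFT and inspect its sidelobes; it is a plain matched-filter illustration, not the generalized synthetic discriminant function filter described in the abstract, and the scene and reference arrays are placeholders.

```python
import numpy as np

# Generic correlation-plane computation: correlate a scene with a reference
# pattern (here a crop of the scene itself, i.e. a matched filter) via the FFT.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))          # placeholder input scene
target = scene[40:72, 40:72]            # placeholder reference pattern

# Zero-pad the reference to the scene size and correlate in the Fourier domain:
# correlation = IFFT( FFT(scene) * conj(FFT(reference)) ).
ref = np.zeros_like(scene)
ref[:target.shape[0], :target.shape[1]] = target
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))).real

# Crude sidelobe check: mask a small window around the main peak and look at
# the largest remaining value in the correlation plane.
peak = corr.max()
iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
masked = corr.copy()
masked[max(0, iy - 5):iy + 6, max(0, ix - 5):ix + 6] = -np.inf
print("central peak:", peak, "largest sidelobe:", masked.max())
```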
Abstract:
Obstructive disease of the large coronary arteries is the predominant cause of angina pectoris. However, angina may also occur in the absence of significant coronary atherosclerosis or coronary artery spasm, especially in women. Myocardial ischaemia in these patients is often associated with abnormalities of the coronary microcirculation and may thus represent a manifestation of coronary microvascular disease (CMD). Elucidation of the role of the microvasculature in the genesis of myocardial ischaemia and cardiac damage, in the presence or absence of obstructive coronary atherosclerosis, will certainly result in more rational diagnostic and therapeutic interventions for patients with ischaemic heart disease. Specifically targeted research based on improved assessment modalities is needed to improve the diagnosis of CMD and to translate current molecular, cellular, and physiological knowledge into new therapeutic options.
Abstract:
There are currently many devices and techniques to quantify trace elements (TEs) in various matrices, but their efficacy depends on the digestion methods (DMs) employed to open such matrices, which, although "organic", contain inorganic components that are difficult to solubilize. This study was carried out to evaluate the recovery of Fe, Zn, Cr, Ni, Cd and Pb contents in samples of composts and cattle, horse, chicken, quail, and swine manures, as well as in sewage sludges and peat. The DMs employed were microwave-assisted acid digestion with HNO3 (EPA 3051A); nitric-perchloric digestion with HNO3 + HClO4 in a digestion block (NP); dry ashing in a muffle furnace followed by solubilization of the residual ash in nitric acid (MDA); digestion with aqua regia (HCl:HNO3) in the digestion block (AR); and acid digestion with HCl and HNO3 + H2O2 (EPA 3050). The dry ashing method led to the greatest recovery of Cd in organic residues, but the EPA 3050 protocol can be an alternative method for the same purpose. Dry ashing should not be employed to determine the concentrations of Cr, Fe, Ni, Pb and Zn in the residues. Higher Cr and Fe contents are recovered when NP and EPA 3050 are employed in the opening of organic matrices. For most of the residues analyzed, AR is the most effective method for recovering Ni. The EPA 3051A and 3050 methods led to the highest recovery of Pb. The choice of the DM that provides maximum recovery of Zn depends on the organic residue and trace element analyzed.
Abstract:
The plant-available water capacity of the soil is defined as the water content between field capacity and the wilting point, and has wide practical application in land-use planning. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated with the van Genuchten and cubic polynomial models. We found that the field capacity water content determined at the inflection point was higher than that obtained by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content at the permanent wilting point estimated with the WP4-T psychrometer was significantly lower. We conclude that the estimate of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
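As an illustration of the inflection-point estimate of field capacity mentioned above, the sketch below evaluates a van Genuchten retention curve and its inflection point taken on the θ versus ln(h) curve, one common convention; the parameter values are hypothetical, and the closed-form expressions are standard properties of the van Genuchten model, not results from this study.

```python
import numpy as np

# Illustrative van Genuchten retention curve (parameter values are hypothetical,
# not taken from the study).
theta_r, theta_s = 0.10, 0.45   # residual and saturated water content (cm3/cm3)
alpha, n = 0.05, 1.6            # van Genuchten parameters (1/cm, dimensionless)
m = 1.0 - 1.0 / n

def theta(h):
    """Volumetric water content at matric suction h (cm of water)."""
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Inflection point of theta versus ln(h); for the van Genuchten curve this has
# the closed form h_i = (1/alpha)*(1/m)**(1/n) and
# theta_i = theta_r + (theta_s - theta_r)*(1 + 1/m)**(-m).
h_i = (1.0 / alpha) * (1.0 / m) ** (1.0 / n)
theta_i = theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)

print(f"inflection-point estimate: theta = {theta_i:.3f} at h = {h_i:.0f} cm")
# Water contents at 6, 10 and 33 kPa (1 kPa is roughly 10.2 cm of water).
print(f"theta at 6, 10, 33 kPa: {theta(61.2):.3f}, {theta(102.0):.3f}, {theta(336.6):.3f}")
```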
Abstract:
The lack of a standard method regulating heavy metal determination in Brazilian fertilizers and the consequent use of several digestion methods have produced variation in the results, hampering their interpretation. Thus, the aim of this study was to compare the effectiveness of three digestion methods for the determination of metals such as Cd, Ni, Pb, and Cr in fertilizers. Samples of 45 fertilizers marketed in northeastern Brazil were used. A fertilizer sample with heavy metal contents certified by the US National Institute of Standards and Technology (NIST) was used as a control. The following fertilizers were tested: rock phosphate; organo-mineral fertilizer with rock phosphate; single superphosphate; triple superphosphate; mixed N-P-K fertilizer; and fertilizer with micronutrients. The samples were digested according to the method recommended by the Ministry of Agriculture, Livestock and Supply of Brazil (MAPA) and by methods 3051A and 3052 of the United States Environmental Protection Agency (USEPA). USEPA method 3052 recovered larger fractions of the less soluble metals such as Ni and Pb, indicating that the conventional digestion methods for fertilizers underestimate the total amount of these elements. The results of USEPA method 3051A were very similar to those of the method currently used in Brazil (Brasil, 2006). The latter is preferable in view of its lower acid cost, shorter digestion period and greater reproducibility.
Abstract:
Drug-eluting microspheres are used for embolization of hypervascular tumors and allow for local controlled drug release. Although drug release from the microspheres relies on fast ion exchange, so far only slow-releasing in vitro dissolution methods have been correlated with in vivo data. Three in vitro release methods are assessed in this study for their potential to predict the slow in vivo release of sunitinib from chemoembolization spheres to the plasma, and the fast local in vivo release observed in an earlier study in rabbits. Release in an orbital shaker was slow (t50% = 4.5 h, 84% release) compared to the fast release in USP 4 flow-through implant cells (t50% = 1 h, 100% release). Sunitinib release in saline from microspheres enclosed in dialysis inserts was prolonged and incomplete (t50% = 9 days, 68% release) owing to low drug diffusion through the dialysis membrane. The slow-release profile fitted best to the low sunitinib plasma AUC following injection of sunitinib-eluting spheres. Although limited by a lack of standardization, release in the orbital shaker fitted best to the local in vivo sunitinib concentrations. Drug release in USP flow-through implant cells was too fast to correlate with the local concentrations, although this method is preferred for discriminating between different sphere types.
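As a small illustration of how a t50% value can be read from a cumulative release curve, the sketch below interpolates the time at 50% release; the data points are invented and this is not the study's dissolution protocol.

```python
import numpy as np

# Hypothetical cumulative-release data (time in hours, fraction released);
# values are made up for illustration, not taken from the study.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
released = np.array([0.00, 0.18, 0.30, 0.44, 0.58, 0.72, 0.84])

# t50% is the time at which half of the dose has been released; here it is
# read off the measured curve by linear interpolation.
t50 = np.interp(0.5, released, t)
print(f"t50% ~ {t50:.1f} h, final release = {released[-1] * 100:.0f}%")
```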
Abstract:
The concept of early detection followed by intervention to improve the prognosis seems straightforward. Applied to asymptomatic subjects, this concept, screening, is rather complex. This review presents the rationale and fundamental principles of screening. It underscores the fundamental principles related to the disease and to the screening test considered, the importance of considering screening as a program rather than a test only, and the validity of the measures used to evaluate the efficacy of screening. Lastly, it reviews the biases most frequently encountered in screening studies and their interpretation.
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability in the speed with which claims are settled and the variability in claim severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model makes it possible to express the variability in the reporting speed and in the development of claim severity as a function of two parameters of the above distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, leading to expected future payments that are either zero or equal to the aggregate amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
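Because the Chain-Ladder method is the benchmark discussed above, a minimal sketch of it is given below; the cumulative claims triangle is invented for illustration, and this is not the thesis's PDM or NBDM implementation.

```python
import numpy as np

# Hypothetical cumulative claims triangle: rows are accident years, columns are
# development years; NaN marks not-yet-observed cells. Values are invented.
C = np.array([
    [100.0, 160.0, 190.0, 200.0],
    [110.0, 175.0, 205.0, np.nan],
    [120.0, 190.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = C.shape[1]

# Chain-Ladder development factors: f_j = sum(column j+1) / sum(column j),
# restricted to rows where both entries are observed.
factors = []
for j in range(n - 1):
    mask = ~np.isnan(C[:, j]) & ~np.isnan(C[:, j + 1])
    factors.append(C[mask, j + 1].sum() / C[mask, j].sum())

# Project each accident year to ultimate by applying the remaining factors,
# then take reserves as ultimate minus latest observed cumulative payment.
ultimates, latest = [], []
for i in range(C.shape[0]):
    last_obs = int(np.max(np.where(~np.isnan(C[i]))[0]))
    ult = C[i, last_obs]
    for j in range(last_obs, n - 1):
        ult *= factors[j]
    ultimates.append(ult)
    latest.append(C[i, last_obs])

reserves = np.array(ultimates) - np.array(latest)
print("development factors:", np.round(factors, 3))
print("claims reserves by accident year:", np.round(reserves, 1))
```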
Abstract:
BACKGROUND: In May 2010, Switzerland introduced a heterogeneous smoking ban in the hospitality sector. While the law leaves room for exceptions in some cantons, it is comprehensive in others. This longitudinal study uses different measurement methods to examine airborne nicotine levels in hospitality venues and the level of personal exposure of non-smoking hospitality workers before and after implementation of the law. METHODS: Personal exposure to second-hand smoke (SHS) was measured by three different methods. We compared a passive sampler, the MoNIC (Monitor of NICotine) badge, with salivary cotinine and nicotine concentrations as well as questionnaire data. The badges allowed the number of passively smoked cigarettes to be estimated. They were placed at the venues as well as distributed to the participants for personal measurements. To assess personal exposure at work, a time-weighted average of the workplace badge measurements was calculated. RESULTS: Prior to the ban, smoke-exposed hospitality venues yielded a mean badge value of 4.48 (95% CI: 3.7 to 5.25; n = 214) cigarette equivalents/day. At follow-up, measurements in venues that had implemented a smoking ban declined significantly, to an average of 0.31 (0.17 to 0.45; n = 37) (p = 0.001). Personal badge measurements also decreased significantly, from an average of 2.18 (1.31 to 3.05; n = 53) to 0.25 (0.13 to 0.36; n = 41) (p = 0.001). Spearman rank correlations between badge exposure measures and salivary measures were small to moderate (0.3 at maximum). CONCLUSIONS: Nicotine levels decreased significantly in all types of hospitality venues after implementation of the smoking ban. In-depth analyses demonstrated that a time-weighted average of the workplace badge measurements represented typical personal SHS exposure at work more reliably than personal exposure measures such as salivary cotinine and nicotine.
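As a small illustration of the time-weighted average used to summarize workplace badge measurements, the sketch below weights each venue's badge value by the hours worked there; the numbers are invented.

```python
# Time-weighted average of workplace badge measurements: weight each venue's
# badge value (cigarette equivalents/day) by the hours the employee spends
# there. All values below are invented for illustration.
venue_badge = [4.2, 0.3, 1.1]     # badge measurement per venue
hours_worked = [20.0, 10.0, 5.0]  # weekly hours spent in each venue

twa = sum(b * h for b, h in zip(venue_badge, hours_worked)) / sum(hours_worked)
print(f"time-weighted average exposure: {twa:.2f} cigarette equivalents/day")
```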
Abstract:
BACKGROUND: Chronic mountain sickness (CMS) is an important public health problem and is characterized by exaggerated hypoxemia, erythrocytosis, and pulmonary hypertension. While pulmonary hypertension is a leading cause of morbidity and mortality in patients with CMS, it is relatively mild and its underlying mechanisms are not known. We speculated that during the mild exercise associated with daily activities, pulmonary hypertension in CMS is much more pronounced. METHODS: We estimated pulmonary artery pressure by echocardiography at rest and during mild bicycle exercise at 50 W in 30 male patients with CMS and 32 age-matched, healthy control subjects who were born and living at an altitude of 3,600 m. RESULTS: The modest, albeit significant, difference in the systolic right-ventricular-to-right-atrial pressure gradient between patients with CMS and controls at rest (30.3 +/- 8.0 vs 25.4 +/- 4.5 mm Hg, P = .002) became more than three times larger during mild bicycle exercise (56.4 +/- 19.0 vs 39.8 +/- 8.0 mm Hg, P < .001). CONCLUSIONS: Measurements of pulmonary artery pressure at rest greatly underestimate pulmonary artery pressure during daily activity in patients with CMS. The marked pulmonary hypertension during the mild exercise associated with daily activity may explain why this problem is a leading cause of morbidity and mortality in patients with CMS.
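The abstract does not state how the pressure gradient was derived from the echocardiographic data; a common convention, assumed here purely for illustration, is the simplified Bernoulli relation applied to the peak tricuspid regurgitation jet velocity.

```python
# Simplified Bernoulli relation commonly used in echocardiography:
# pressure gradient (mm Hg) ~ 4 * v^2, where v is the peak tricuspid
# regurgitation jet velocity in m/s. The velocities below are illustrative.
def rv_ra_gradient(v_tr_m_per_s: float) -> float:
    return 4.0 * v_tr_m_per_s ** 2

for v in (2.5, 2.75, 3.2, 3.75):
    print(f"v = {v:.2f} m/s  ->  gradient ~ {rv_ra_gradient(v):.0f} mm Hg")
```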
Abstract:
PURPOSE: To investigate the ability of fibroblast growth factor (FGF) 2-saporin to prevent lens regrowth in the rabbit. METHODS: Chemically conjugated and genetically fused FGF2-saporin (made in Escherichia coli) were used. Extracapsular extraction of the lens was performed on the rabbit, and the cytotoxin either was injected directly into the capsular bag or was administered via FGF2-saporin-coated, heparin-surface-modified (HSM) polymethylmethacrylate intraocular lenses. The potential of the conjugate was assessed by slit-lamp evaluation of capsular opacification and by measuring crystallin synthesis. Toxin diffusion and sites of toxin binding were assessed by immunohistochemistry. Possible toxicity was determined by histologic analysis of ocular tissues. RESULTS: FGF2-saporin effectively inhibited lens regrowth when injected directly into the capsular bag. However, high concentrations of the toxin induced transient corneal edema and loss of pigment in the iris. Intraocular lenses coated with FGF2-saporin reduced lens regrowth and crystallin synthesis without any detectable clinical side effects. After implantation, FGF2-saporin was shown to have bound to the capsules and, to a lesser extent, to the iris; no histologic damage to ocular tissues was found as a result of implantation of drug-loaded HSM intraocular lenses. CONCLUSIONS: Chemically conjugated (FGF2-SAP) and genetically fused (rFGF2-SAP) FGF2-saporin bound to HSM intraocular lenses can prevent lens regrowth in the rabbit.
Abstract:
Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data acquired. This question is at the core of this thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. The thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the proposed developments. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier. The lack of labeled information, and the uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and on semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last proposed model considers the more theoretical question of the structure among the outputs: the integration of this source of information, never considered in remote sensing until now, opens new research challenges.
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, so as to avoid overly complex models that would not be adopted by users. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model that learns the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information were the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
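As a generic illustration of kernel-method pixel classification, unrelated to the thesis's specific models, the sketch below trains an RBF-kernel support vector machine (via scikit-learn) on a few labeled pixels described by their spectral bands; the image and labels are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic "image": 100 x 100 pixels with 4 spectral bands, plus a few labeled
# pixels for two land-cover classes. All values are made up for illustration.
rng = np.random.default_rng(42)
image = rng.normal(size=(100, 100, 4))
image[:50] += 1.0                        # make the top half spectrally distinct

labeled_rows = np.array([5, 10, 20, 70, 80, 90])
labeled_cols = np.array([5, 50, 90, 10, 60, 95])
labels = np.array([0, 0, 0, 1, 1, 1])    # 0 = class A, 1 = class B

# Train an RBF-kernel SVM on the band values of the labeled pixels.
X_train = image[labeled_rows, labeled_cols]          # shape (6, 4)
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_train, labels)

# Classify every pixel of the image by its spectral signature.
X_all = image.reshape(-1, 4)
class_map = clf.predict(X_all).reshape(100, 100)
print("predicted class proportions:", np.bincount(class_map.ravel()) / class_map.size)
```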