Abstract:
This study compared an enzyme-linked immunosorbent assay (ELISA) to a liquid chromatography-tandem mass spectrometry (LC/MS/MS) technique for measurement of tacrolimus concentrations in adult kidney and liver transplant recipients, and investigated how assay choice influenced pharmacokinetic parameter estimates and drug dosage decisions. Tacrolimus concentrations measured by both ELISA and LC/MS/MS from 29 kidney (n = 98 samples) and 27 liver (n = 97 samples) transplant recipients were used to evaluate the performance of these methods in the clinical setting. Tacrolimus concentrations measured by the two techniques were compared via regression analysis. Population pharmacokinetic models were developed independently using ELISA and LC/MS/MS data from 76 kidney recipients. Derived kinetic parameters were used to formulate typical dosing regimens for concentration targeting. Dosage recommendations for the two assays were compared. The relationship between LC/MS/MS and ELISA measurements was best described by the regression equation ELISA = 1.02 × (LC/MS/MS) + 0.14 in kidney recipients, and ELISA = 1.12 × (LC/MS/MS) - 0.87 in liver recipients. ELISA displayed less accuracy than LC/MS/MS at lower tacrolimus concentrations. Population pharmacokinetic models based on ELISA and LC/MS/MS data were similar, with residual random errors of 4.1 ng/mL and 3.7 ng/mL, respectively. Assay choice gave rise to dosage prediction differences ranging from 0% to 30%. ELISA measurements of tacrolimus are not automatically interchangeable with LC/MS/MS values. Assay differences were greatest in adult liver recipients, probably reflecting periods of liver dysfunction and impaired biliary secretion of metabolites. While the majority of data collected in this study suggested assay differences in adult kidney recipients were minimal, the finding of ELISA dosage underpredictions of up to 25% in the long term must be investigated further.
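The reported cross-assay regressions can be applied directly to convert an LC/MS/MS tacrolimus concentration into the reading expected from ELISA. A minimal sketch, with the coefficients taken from the abstract and the function names our own:

```python
# Hedged sketch: applying the reported cross-assay regressions.
# Coefficients are from the abstract; function names are illustrative.

def elisa_from_lcmsms(conc_ng_ml, slope, intercept):
    """Predict the ELISA reading (ng/mL) from an LC/MS/MS concentration."""
    return slope * conc_ng_ml + intercept

def kidney_elisa(conc_ng_ml):
    # Kidney recipients: ELISA = 1.02 * (LC/MS/MS) + 0.14
    return elisa_from_lcmsms(conc_ng_ml, 1.02, 0.14)

def liver_elisa(conc_ng_ml):
    # Liver recipients: ELISA = 1.12 * (LC/MS/MS) - 0.87
    return elisa_from_lcmsms(conc_ng_ml, 1.12, -0.87)

# At a trough of 10 ng/mL by LC/MS/MS:
print(round(kidney_elisa(10.0), 2))  # 10.34
print(round(liver_elisa(10.0), 2))   # 10.33
```

Note how the two equations nearly agree at 10 ng/mL but diverge at low concentrations, where the abstract reports ELISA to be less accurate.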
Abstract:
We investigate spectral functions extracted using the maximum entropy method from correlators measured in lattice simulations of the (2+1)-dimensional four-fermion model. This model is particularly interesting because it has both a chirally broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are only resonances. In the broken phase we study the elementary fermion, pion, sigma, and massive pseudoscalar meson; our results confirm the Goldstone nature of the π and permit an estimate of the meson binding energy. We have, however, seen no signal of σ→ππ decay as the chiral limit is approached. In the symmetric phase we observe a resonance of nonzero width in qualitative agreement with analytic expectations; in addition the ultraviolet behavior of the spectral functions is consistent with the large nonperturbative anomalous dimension for fermion composite operators expected in this model.
Abstract:
Introduction Bioelectrical impedance analysis (BIA) is a useful field measure to estimate total body water (TBW). No prediction formulae have been developed or validated against a reference method in patients with pancreatic cancer. The aim of this study was to assess the agreement between three prediction equations for the estimation of TBW in cachectic patients with pancreatic cancer. Methods Resistance was measured at frequencies of 50 and 200 kHz in 18 outpatients (10 males and 8 females, age 70.2 ± 11.8 years) with pancreatic cancer from two tertiary Australian hospitals. Three published prediction formulae were used to calculate TBW: TBWs, developed in surgical patients, and TBWca-uw and TBWca-nw, developed in underweight and normal weight patients with end-stage cancer, respectively. Results There was no significant difference in the TBW estimated by the three prediction equations: TBWs 32.9 ± 8.3 L, TBWca-nw 36.3 ± 7.4 L, TBWca-uw 34.6 ± 7.6 L. At a population level, there is agreement between the predictions of TBW in patients with pancreatic cancer estimated from the three equations. The best combination of low bias and narrow limits of agreement was observed when TBW was estimated from the equation developed in the underweight cancer patients rather than the one developed in the normal weight cancer patients. When no established BIA prediction equation exists, practitioners should utilize an equation developed in a population with similar critical characteristics such as diagnosis, weight loss, body mass index and/or age. Conclusions Further research is required to determine the accuracy of the BIA prediction technique against a reference method in patients with pancreatic cancer.
Abstract:
Electricity markets are complex environments with very particular characteristics. MASCEM is a market simulator developed to allow in-depth studies of the interactions between the players that take part in electricity market negotiations. This paper presents a new proposal for the definition of MASCEM players' strategies for negotiating in the market. The proposed methodology is multiagent based, using reinforcement learning algorithms to provide players with the capability to perceive changes in the environment and to adapt their bid formulation according to their needs, drawing on a set of different techniques at their disposal. Each agent holds the knowledge of a different method for defining a market bidding strategy; the main agent chooses the best among all of them and provides it to the market player that requests it, for use in the market. This paper also presents a methodology to manage the efficiency/effectiveness balance of this method, to guarantee that simulator processing times do not degrade beyond an acceptable level.
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to estimate the value of LVEF precisely, which can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of two reconstruction methods: filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. Results: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is seen with the OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders 5, 10, or 15 for FBP, as yielding the best estimates of the left ventricular volumes and ejection fraction in myocardial perfusion scintigraphy.
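The ejection fraction against which both reconstructions are judged is itself a simple ratio of the phantom's ventricular volumes. A minimal sketch of the standard definition (the volumes used below are illustrative, not the phantom's actual values):

```python
def lvef_percent(edv_ml, esv_ml):
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: LVEF = 100 * (EDV - ESV) / EDV."""
    stroke_volume_ml = edv_ml - esv_ml  # volume ejected per beat
    return 100.0 * stroke_volume_ml / edv_ml

# Illustrative volumes only:
print(round(lvef_percent(120.0, 50.0), 1))  # 58.3
```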
Abstract:
The conventional methods used to evaluate chitin content in fungi, such as biochemical assessment of glucosamine release after acid hydrolysis or epifluorescence microscopy, are low throughput, laborious, and time-consuming, and cannot evaluate a large number of cells. We developed an efficient and fast flow cytometric assay, based on Calcofluor White staining, to measure chitin content in yeast cells. A staining index was defined whose value is directly related to the chitin amount while taking into consideration the different levels of autofluorescence. Twenty-two Candida spp. and four Cryptococcus neoformans clinical isolates with distinct susceptibility profiles to caspofungin were evaluated. Candida albicans clinical isolate SC5314 and isogenic strains with deletions in chitin synthase 3 (chs3Δ/chs3Δ) and in genes encoding predicted glycosylphosphatidylinositol (GPI)-anchored proteins (pga31Δ/Δ and pga62Δ/Δ) were used as controls. As expected, the wild-type strain displayed a significantly higher chitin content (P < 0.001) than chs3Δ/chs3Δ and pga31Δ/Δ, especially in the presence of caspofungin. Ca. parapsilosis, Ca. tropicalis, and Ca. albicans showed higher cell wall chitin content. Although no relationship between chitin content and antifungal drug susceptibility phenotype was found, an association was established between the paradoxical growth effect in the presence of high caspofungin concentrations and the chitin content. This novel flow cytometry protocol proved to be a simple and reliable assay to estimate the cell wall chitin content of fungi.
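The abstract defines a staining index relating fluorescence to chitin amount while correcting for autofluorescence, but does not give its formula. The background-corrected ratio below is a common flow-cytometry convention, used here purely as an illustrative assumption rather than the paper's definition:

```python
def staining_index(mfi_stained, mfi_unstained):
    """Illustrative staining index: Calcofluor White signal corrected for,
    and normalized by, the autofluorescence of unstained cells.
    This exact ratio is an assumption, not the paper's formula."""
    return (mfi_stained - mfi_unstained) / mfi_unstained

# Hypothetical median fluorescence intensities (arbitrary units):
print(round(staining_index(500.0, 50.0), 1))  # 9.0
```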
Abstract:
This paper presents a contrastive approach between three different ways of building concepts, after showing the similar syntactic possibilities that coexist in terms. From the semantic point of view, however, we can see that each language family has a different distribution of meaning. The most important point we try to show is that the differences found in the psychological process of communicating concepts should guide the translator and the terminologist in target text production and in the terminology planning process. Differences between languages in the information transmission process are due to the different roles played by the different types of knowledge. We distinguish here analytic-descriptive knowledge and analogical knowledge, among others. We also state that neither of them is inherently best when determining the correctness of a term; rather, there have to be adequacy criteria in the selection process. The success of this concept building, or term building, is important when looking at the linguistic map of the information society.
Abstract:
The objective of the study was to develop regression models to describe the epidemiological profile of dental caries in 12-year-old children in an area of low prevalence of caries. Two distinct random probabilistic samples of schoolchildren (n=1,763) attending public and private schools in Piracicaba, Southeastern Brazil, were studied. Regression models were estimated as a function of the most affected teeth using data collected in 2005 and were validated using a 2001 database. The mean (SD) DMFT index was 1.7 (2.08) in 2001 and the regression equations estimated a DMFT index of 1.67 (1.98), which corresponds to 98.2% of the DMFT index in 2001. The study provided detailed data on the caries profile in 12-year-old children by using an updated analytical approach. Regression models can be an accurate and feasible method that can provide valuable information for the planning and evaluation of oral health services.
Abstract:
Master's dissertation, Estudos Integrados dos Oceanos (Integrated Ocean Studies), 15 March 2016, Universidade dos Açores.
Abstract:
Void formation during the injection phase of the liquid composite molding process can be explained as a consequence of the non-uniformity of the flow front progression. This is due to the dual porosity within the fiber preform (the spacing between the fiber tows is much larger than that between the fibers within a tow), and therefore the best explanation is provided by a mesolevel analysis, where the characteristic dimension is given by the fiber tow diameter, of the order of millimeters. In mesolevel analysis, liquid impregnation at two different scales, inside the fiber tows and within the open spaces between them, must be considered, and the coupling between the flow regimes must be addressed. In such cases, it is extremely important to account correctly for surface tension effects, which can be modeled as a capillary pressure applied at the flow front. Numerical implementation of such boundary conditions leads to ill-posing of the problem, in terms of the classical weak formulation as well as the stabilized formulation. As a consequence, an error in mass conservation accumulates, especially along the free flow front. A numerical procedure was formulated and implemented in an existing Free Boundary Program to reduce this error significantly.
Abstract:
Reporter genes are routinely used in every molecular and cellular biology laboratory for studying heterologous gene expression and general cell biological mechanisms, such as transfection processes. Although well characterized and broadly implemented, reporter genes present serious limitations, either by involving time-consuming procedures or by having possible side effects on the expression of the heterologous gene or even on the general cellular metabolism. Fourier transform mid-infrared (FT-MIR) spectroscopy was evaluated as a rapid (minutes) and high-throughput (96-well microplate) method to simultaneously analyze the transfection efficiency and the effect of the transfection process on the host cell's biochemical composition and metabolism. Semi-adherent HEK and adherent AGS cell lines, transfected with the plasmid pVAX-GFP using Lipofectamine, were used as model systems. Good partial least squares (PLS) models were built to estimate the transfection efficiency, either considering each cell line independently (R² ≥ 0.92; RMSECV ≤ 2%) or considering both cell lines simultaneously (R² = 0.90; RMSECV = 2%). Additionally, the effect of the transfection process on the HEK cells' biochemical and metabolic features could be evaluated directly from the FT-IR spectra. Owing to the high sensitivity of the technique, it was also possible to discriminate the effect of the transfection process from that of the transfection reagent on HEK cells, e.g., by the analysis of spectral biomarkers and biochemical and metabolic features. The present results are far beyond what any reporter gene assay or other specific probe can offer for these purposes.
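The figures of merit quoted for the PLS models, R² and RMSECV (root mean square error of cross-validation), can be computed from cross-validated predictions. A minimal pure-Python sketch with hypothetical transfection-efficiency values (the PLS model itself is not reproduced here):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error; applied to cross-validated predictions,
    this is the RMSECV figure quoted in the abstract."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. cross-validation-predicted efficiencies (%):
measured  = [10.0, 20.0, 30.0, 40.0]
predicted = [11.0, 19.0, 31.0, 39.0]
print(round(rmse(measured, predicted), 2))       # 1.0
print(round(r_squared(measured, predicted), 3))  # 0.992
```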
Abstract:
Dissertation for the degree of Master in Electrical Engineering, Energy Branch.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
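The linear mixing model just described states that an observed spectrum is a convex combination of endmember signatures. A minimal sketch with hypothetical 3-band signatures (names and values are illustrative only):

```python
def mix_pixel(endmembers, abundances):
    """Linear mixing model: observed spectrum = sum_i a_i * s_i,
    with nonnegative abundances a_i summing to one."""
    assert abs(sum(abundances) - 1.0) < 1e-9, "abundances must sum to 1"
    n_bands = len(endmembers[0])
    return [sum(a * s[b] for a, s in zip(abundances, endmembers))
            for b in range(n_bands)]

# Hypothetical 3-band reflectance signatures:
soil  = [0.40, 0.50, 0.60]
grass = [0.10, 0.60, 0.30]
pixel = mix_pixel([soil, grass], [0.7, 0.3])
print([round(v, 3) for v in pixel])  # [0.31, 0.53, 0.51]
```

Unmixing is the inverse problem: given `pixel` and the signatures, recover the abundance fractions.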
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers have been extracted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
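The extreme-projection step that PPI and VCA share, projecting every spectral vector onto a direction and keeping the pixel at the extreme, can be sketched as follows (pure Python, toy 2-band data; the function name is ours):

```python
def extreme_pixel(pixels, direction):
    """Index of the pixel whose projection onto 'direction' is most extreme.
    In VCA the direction is chosen orthogonal to the subspace spanned by
    the endmembers found so far; in PPI it is a random skewer."""
    projections = [sum(p * d for p, d in zip(px, direction)) for px in pixels]
    return max(range(len(pixels)), key=lambda i: abs(projections[i]))

# Toy 2-band data: two pure pixels and one mixed pixel between them.
pixels = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(extreme_pixel(pixels, [1.0, 0.0]))  # 0  (first pure pixel)
print(extreme_pixel(pixels, [0.0, 1.0]))  # 1  (second pure pixel)
```

Because pure pixels sit at the simplex vertices, the mixed pixel is never the extreme of any projection direction here, which is exactly the geometric fact these algorithms exploit.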
Abstract:
In this dissertation we studied the feasibility of using electrodialysis with bipolar membranes (BM) to recover hydrochloric acid and sodium hydroxide from an industrial effluent containing 1.4 mol/L of sodium chloride. These membranes proved to be an efficient tool for producing acids and bases from the corresponding salt. A selection of different bipolar (Neosepta, Fumatech, and PCA) and anionic (PC-SA and PC-ACID 60) membranes was made in an attempt to find the most suitable combination for treating the effluent. Depending on the criterion, the best membrane arrangement is the use of PC-ACID 60 (anionic membrane), PC-SK (cationic membrane), and Neosepta bipolar membranes for the highest product purity; Fumatech bipolar membranes for the highest desalination efficiency; and PCA bipolar membranes for the highest degree of desalination. Technologically, it was possible to achieve 99.8% desalination in four hours of operation in batch mode with recirculation of all streams. Regardless of the combination used, it is advisable to stop the process when the current density ceases to be at its maximum, 781 A/m². This avoids the buildup of impurities in the products, back-diffusion, a sudden drop in pH, and inefficient desalination. At pilot scale, the main supplier of membranes and "stack" treatment units is the German brand PCA. Accordingly, repeatability, back-diffusion, economic evaluation, and upscaling trials were carried out using the PCA bipolar membranes. At the economic level, the use of two types of treatment units, the EDQ 380 and the EDQ 1600, was studied for different desalination levels (50, 75, and 80%). Considering the economic optimization, a maximum desalination of 80% is recommended, since the process efficiency at that point is 40%.
Applying the method with the EDQ 1600 unit at 50% desalination is the most economically advantageous option, with costs of 16 €/m³ of treated effluent or 0.78 €/kg of Cl⁻ removed. Four units are required, arranged in series.
Abstract:
Sulfadiazine is an antibiotic of the sulfonamide group and is used as a veterinary drug in fish farming. Monitoring it in the tanks is fundamental to control the applied doses and avoid environmental dissemination. Pursuing this goal, we included a novel potentiometric design in a flow-injection assembly. The electrode body was a stainless steel needle veterinary syringe of 0.8-mm inner diameter. A selective PVC membrane acted as the sensory surface. Its composition, the length of the electrode, and other flow variables were optimized. The best performance was obtained for sensors of 1.5-cm length and a membrane composition of 33% PVC, 66% o-nitrophenyloctyl ether, 1% ion exchanger, and a small amount of a cationic additive. It exhibited Nernstian slopes of 61.0 mV decade⁻¹ down to 1.0×10⁻⁵ mol L⁻¹, with a limit of detection of 3.1×10⁻⁶ mol L⁻¹ in flowing media. All necessary pH/ionic strength adjustments were performed online by merging the sample plug with a buffer carrier of 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid, pH 4.9. The sensor exhibited the advantages of a fast response time (less than 15 s), a long operational lifetime (60 days), and good selectivity over chloride, nitrite, acetate, tartrate, citrate, and ascorbate. The flow setup was successfully applied to the analysis of aquaculture waters. The analytical results were validated against those obtained with liquid chromatography–tandem mass spectrometry procedures. The sampling rate was about 84 samples per hour and recoveries ranged from 95.9 to 106.9%.
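The Nernstian calibration reported for the sensor (61.0 mV per decade) can be inverted to recover a sulfadiazine concentration from a measured potential. A minimal sketch; the E0 value and the sign convention below are assumptions (for an anionic analyte the slope is typically negative), not values from the paper:

```python
def conc_from_potential(e_mv, e0_mv, slope_mv):
    """Invert the Nernstian calibration E = E0 + slope * log10(C)
    to recover the analyte concentration C in mol/L.
    slope_mv is the per-decade slope; its sign depends on the ion charge."""
    return 10.0 ** ((e_mv - e0_mv) / slope_mv)

# With an assumed -61.0 mV/decade anionic response and E0 = 0 mV,
# a potential 244 mV above E0 corresponds to a 4-decade lower concentration:
print(conc_from_potential(244.0, 0.0, -61.0))  # ~1e-4 mol/L
```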