930 results for Linear and nonlinear correlation
Abstract:
The aims of this study were: (1) to correlate surface hardness (SH) and cross-sectional hardness (CSH) with microradiographic parameters of artificial enamel lesions; (2) to compare lesions prepared by different protocols. Fifty bovine enamel specimens were allocated by stratified randomisation, according to their initial SH values, to five groups, and lesions were produced by different methods: MC gel (methylcellulose gel/lactic acid, pH 4.6, 14 days); PA gel (polyacrylic acid/lactic acid/hydroxyapatite, pH 4.8, 16 h); MHDP (undersaturated lactate buffer/methyl diphosphonate, pH 5.0, 6 days); buffer (undersaturated acetate buffer/fluoride, pH 5.0, 16 h); and pH cycling (7 days). SH of the lesions (SH(1)) was measured. The specimens were longitudinally sectioned, and transverse microradiography (TMR) and CSH were measured at 10-220 µm depth from the surface. Overall, there was a medium correlation between mineral content and √CSH, but the relationship was non-linear and variable. √SH(1) was weakly to moderately correlated with surface-layer properties, weakly correlated with lesion depth, but uncorrelated with integrated mineral loss. MHDP lesions showed the highest subsurface mineral loss, followed by pH cycling, buffer, PA gel and MC gel lesions. The conclusions were: (1) CSH, as an alternative to TMR, does not estimate mineral content very accurately, but it gives information about the mechanical properties of lesions; (2) SH should not be used to analyse lesions; (3) artificial caries lesions produced by the different protocols differ, especially with respect to the method of analysis. Copyright (C) 2009 S. Karger AG, Basel
Abstract:
Final Master's project submitted to obtain the degree of Master in Civil Engineering
Abstract:
This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed-loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis, and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension to other performance measures and new classes of nonlinearities.
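As an illustration of this kind of analysis, the sketch below (not from the paper) embeds a few hypothetical second-order closed-loop step responses in two dimensions with multidimensional scaling, using a simple time-response dissimilarity; the systems, damping ratios, and distance measure are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): MDS map of controllers from a
# time-response dissimilarity. Systems, damping ratios, and the distance
# measure are assumptions made for this example.
import numpy as np
from sklearn.manifold import MDS

t = np.linspace(0, 10, 500)

def step_response(zeta, wn=1.0):
    """Unit-step response of an underdamped second-order system (hypothetical test case)."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t))

# A few hypothetical closed-loop responses with different damping ratios.
responses = np.array([step_response(z) for z in (0.2, 0.4, 0.6, 0.8)])

# Time-response performance index: pairwise L2 distance between responses.
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)

# Two-dimensional MDS embedding computed from the dissimilarity matrix.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(coords)
```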
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other approaches, such as the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27], have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, under certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
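The passage above describes the linear mixing model and its constrained least-squares inversion; the following sketch illustrates that setup on synthetic data (the signatures, noise level, and the sum-to-one augmentation weight are assumptions, not values from the chapter).

```python
# Minimal sketch of the linear mixing model and a fully constrained
# least-squares inversion (abundances nonnegative and summing to one).
# The endmember matrix, abundances, and noise level are synthetic assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
L, p = 50, 3                          # number of bands, number of endmembers
M = rng.random((L, p))                # columns: synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])    # true abundances (sum to one)
y = M @ a_true + 0.001 * rng.standard_normal(L)   # observed pixel with noise

# Nonnegativity is handled by NNLS; the sum-to-one constraint is enforced
# (approximately) by appending a heavily weighted row of ones.
delta = 1e3
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)
print(a_hat)                          # close to a_true when the noise is small
```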
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL)-based algorithm [55].
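Since the paragraph above mentions PCA, MNF, and SVD as dimensionality reduction steps preceding unmixing, here is a minimal PCA-via-SVD sketch on simulated mixtures; the data sizes and noise level are arbitrary assumptions, not values from the chapter.

```python
# Sketch, under synthetic assumptions, of PCA-based dimensionality reduction of
# a hyperspectral data matrix prior to unmixing; sizes and noise are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bands, p = 1000, 100, 3
M = rng.random((n_bands, p))                     # synthetic endmember signatures
A = rng.dirichlet(np.ones(p), size=n_pixels)     # abundances on the simplex
Y = A @ M.T + 0.01 * rng.standard_normal((n_pixels, n_bands))  # noisy mixtures

# PCA via SVD of the mean-removed data; keep p - 1 components, since the
# sum-to-one constraint confines the noiseless data to an affine subspace.
Y_centered = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y_centered, full_matrices=False)
Y_reduced = Y_centered @ Vt[: p - 1].T           # shape (n_pixels, p - 1)
print(Y_reduced.shape)
```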
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
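The PPI scoring step described above (projection onto random skewers and counting extremes) can be sketched as follows; the data matrix and the number of skewers are made-up placeholders.

```python
# Rough sketch of the skewer-based PPI scoring described above; the data
# matrix and the number of skewers are made-up placeholders.
import numpy as np

rng = np.random.default_rng(2)
Y = rng.random((5000, 10))               # (pixels x reduced bands), synthetic
n_skewers = 1000
skewers = rng.standard_normal((10, n_skewers))
skewers /= np.linalg.norm(skewers, axis=0)

proj = Y @ skewers                        # projections, shape (pixels, skewers)
scores = np.zeros(Y.shape[0], dtype=int)
for k in range(n_skewers):
    scores[np.argmax(proj[:, k])] += 1    # record the extremes for each skewer
    scores[np.argmin(proj[:, k])] += 1

purest = np.argsort(scores)[::-1][:5]     # pixels with the highest purity scores
print(purest, scores[purest])
```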
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
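A simplified sketch of the iterative orthogonal-projection idea described above is given below; it only illustrates the geometry (project onto a direction orthogonal to the endmembers found so far and take the extreme of the projection) on synthetic data and is not the full VCA algorithm.

```python
# Simplified illustration of the iterative orthogonal-projection step: at each
# iteration, project the data onto a direction orthogonal to the subspace
# spanned by the endmembers found so far and take the extreme of the projection.
# This is a geometric sketch on synthetic data, not the full VCA algorithm.
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Y: (pixels x bands) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    E = np.zeros((d, 0))                       # endmember signatures found so far
    indices = []
    for _ in range(p):
        w = rng.standard_normal(d)
        if E.shape[1] > 0:                     # make w orthogonal to span(E)
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        idx = int(np.argmax(np.abs(Y @ w)))    # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, Y[idx]])
    return indices, E

Y = np.random.default_rng(3).random((2000, 20))   # synthetic data cloud
idx, E = extract_endmembers(Y, p=4)
print(idx)
```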
Abstract:
The theory of fractional calculus goes back to the beginning of the theory of differential calculus, but its inherent complexity postponed the application of the associated concepts. In the last decade, progress in the areas of chaos and fractals revealed subtle relationships with fractional calculus, leading to an increasing interest in the development of this new paradigm. In the area of automatic control, preliminary work has already been carried out, but the proposed algorithms are restricted to the frequency domain. This paper discusses the design of fractional-order discrete-time controllers. The algorithms studied adopt the time domain, which makes them suited for z-transform analysis and discrete-time implementation. The performance of discrete-time fractional-order controllers with linear and non-linear systems is also investigated.
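One common time-domain route to discrete fractional-order operators, consistent with the approach outlined above, is the Grünwald-Letnikov approximation; the sketch below is a generic illustration (signal, order, and sampling period are arbitrary), not the controllers proposed in the paper.

```python
# Hedged sketch: Grünwald-Letnikov approximation of a fractional derivative of
# order alpha for a sampled signal, a standard time-domain building block for
# fractional-order controllers. Signal, order, and sampling period are arbitrary.
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(x, alpha, T):
    """Approximate D^alpha of the sampled signal x with sampling period T."""
    N = len(x)
    w = np.array([(-1) ** j * binom(alpha, j) for j in range(N)])  # GL weights
    y = np.zeros(N)
    for k in range(N):
        y[k] = np.dot(w[: k + 1], x[k::-1]) / T ** alpha
    return y

T = 0.01
t = np.arange(0.0, 1.0, T)
x = t ** 2                                     # test signal
print(gl_fractional_derivative(x, alpha=0.5, T=T)[-5:])
```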
Abstract:
Purpose: Studies on the links between sustainability, innovation, and competitiveness have mainly focused on the organizational and business level. The purpose of this research is to investigate whether there is a correlation between these three variables at country level. Using well-recognized international rankings of country sustainability, innovation, and competitiveness, correlation analysis was performed, allowing for the conclusion that there are indeed high correlations (and possible relationships) between the three variables at country level. Design/methodology/approach: The sustainability, innovation, and competitiveness literature was reviewed, identifying a lack of studies examining these three variables at country level. Three major, well-recognized indexes were used to support the quantitative research: the World Economic Forum (2013) sustainability-adjusted global competitiveness index; the Global Innovation Index (2014) issued by Cornell University, INSEAD, and WIPO; and the IMD World Competitiveness Yearbook (2014). After confirming the normality of the distributions, Pearson correlation analysis was performed, with results showing high linear correlations between the three indexes. Findings: The results of the correlation analysis using the Pearson correlation coefficient (all correlation coefficients are greater than 0.73) give strong support to the conclusion that there is indeed a high correlation (and a possible relationship) between social sustainability, innovation and competitiveness at country level. Research limitations/implications: Further research is advisable to better understand the factors that contribute to the presented results and to establish a global paradigm linking these three main constructs (social sustainability, innovation, and competitiveness). Some authors consider that these measurements are not fully supported (e.g., due to differing country standards); however, it is assumed that these differing underlying methodological approaches, when used in conjunction, can be considered a set of reliable and useful performance indicators. Practical implications: The results highlight the simultaneous relationship between superior performance in social sustainability, innovation, and competitiveness, and the need to take these considerations into account in business and operating models. Social implications: This research suggests that sustainability and innovation policies, strategies, and practices are relevant for country competitiveness and should be promoted, particularly in countries ranked low on sustainability and innovation global scoring indexes. Originality/value: This is one of the few studies addressing the relationships between sustainability, innovation and competitiveness at country level.
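For illustration only, the following sketch reproduces the kind of Pearson correlation analysis described above on made-up country scores (the values are not the study data).

```python
# Illustration only: Pearson correlation between hypothetical country scores
# (the numbers below are invented, not the indexes used in the study).
import numpy as np
from scipy.stats import pearsonr

sustainability  = np.array([5.1, 4.8, 5.6, 4.2, 5.9, 4.5])
innovation      = np.array([55.0, 50.3, 60.1, 42.7, 63.5, 46.2])
competitiveness = np.array([80.2, 76.5, 85.0, 70.1, 88.3, 73.4])

for name, x in [("innovation", innovation), ("competitiveness", competitiveness)]:
    r, p = pearsonr(sustainability, x)
    print(f"sustainability vs {name}: r = {r:.2f}, p = {p:.3f}")
```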
Abstract:
This work attempts to establish dermatological identification patterns for Brazilian cnidarian species and a probable correlation with envenoming severity. In an observational prospective study, one hundred and twenty-eight patients from the North Coast region of São Paulo State, Brazil, were seen between 2002 and 2008. About 80% of these showed only local effects (erythema, edema, and pain), with small (less than 20 cm), oval or round skin marks and impressions from small tentacles. Approximately 20% of the victims had long (more than 20 cm), linear and crossed marks with frequent systemic phenomena, such as malaise, vomiting, dyspnea, and tachycardia. The former pattern is compatible with the common hydromedusa of Southeast and Southern Brazil (Olindias sambaquiensis). The long linear marks with intense pain and systemic phenomena are compatible with envenoming by the box jellyfish Tamoya haplonema and Chiropsalmus quadrumanus and by the hydrozoan Portuguese man-of-war (Physalia physalis). There was an association between skin marks and probable accident etiology. This simple observation rule can be indicative of severity, as the class Cubozoa (box jellyfish) and the Portuguese man-of-war cause the most severe accidents. In such cases, medical attention, including intensive care, is important, as the systemic manifestations can be associated with death.
Abstract:
INTRODUCTION: Antifungal susceptibility testing assists in finding the appropriate treatment for fungal infections, which are increasingly common. However, such testing is not very widespread. There are several existing methods, and the correlation between such methods was evaluated in this study. METHODS: The susceptibility to fluconazole of 35 strains of Candida sp. isolated from blood cultures was evaluated by the following methods: microdilution, Etest, and disk diffusion. RESULTS: The correlation between the methods was around 90%. CONCLUSIONS: The disk diffusion test exhibited a good correlation and can be used in laboratory routines to detect strains of Candida sp. that are resistant to fluconazole.
Abstract:
One of the most popular approaches to path planning and control is the potential field method. This method is particularly attractive because it is suitable for on-line feedback control. In this approach, the gradient of a potential field is used to generate the robot's trajectory. Thus, the path is generated by the transient solutions of a dynamical system. In the nonlinear attractor dynamic approach, on the other hand, the path is generated by a sequence of attractor solutions. In this way, the transient solutions of the potential field method are replaced by a sequence of attractor solutions (i.e., asymptotically stable states) of a dynamical system. We discuss, at a theoretical level, some of the main differences between these two approaches.
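A minimal sketch of the potential-field idea discussed above, with an attractive quadratic potential and a standard repulsive term active near a single obstacle; the goal, obstacle, gains, and step size are arbitrary assumptions, not values from the paper.

```python
# Illustrative sketch of the potential field method: the trajectory follows the
# negative gradient of an attractive-plus-repulsive potential. Goal, obstacle,
# gains, and step size are arbitrary assumptions.
import numpy as np

goal = np.array([5.0, 5.0])
obstacle = np.array([2.5, 2.0])
k_att, k_rep, rho0 = 1.0, 2.0, 1.5        # gains and obstacle influence radius

def gradient(q):
    grad = k_att * (q - goal)              # gradient of the attractive potential
    d = np.linalg.norm(q - obstacle)
    if d < rho0:                           # repulsive term active near the obstacle
        grad += k_rep * (1.0 / rho0 - 1.0 / d) * (q - obstacle) / d**3
    return grad

q = np.array([0.0, 0.0])
for _ in range(500):                       # simple gradient descent on the field
    q = q - 0.05 * gradient(q)
print(q)                                   # ends near the goal
```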
Abstract:
Background: Blood pressure is directly related to body mass index, and individuals with increased waist circumference have a higher risk of developing hypertension, insulin resistance, and other metabolic changes from adolescence onward. Objective: To evaluate the correlation of blood pressure with insulin resistance, waist circumference and body mass index in adolescents. Methods: Cross-sectional study on a representative sample of adolescent students. One group of adolescents with altered blood pressure, detected by casual blood pressure measurement and/or home blood pressure monitoring (blood pressure > 90th percentile), and one group of normotensive adolescents were studied. Body mass index and waist circumference were measured, and fasting glucose and plasma insulin levels were determined, with the HOMA-IR index used to identify insulin resistance. Results: A total of 162 adolescents (35 with normal blood pressure and 127 with altered blood pressure) were studied; 61% (n = 99) of them were boys, and the mean age was 14.9 ± 1.62 years. Thirty-eight (23.5%) adolescents had altered HOMA-IR. The group with altered blood pressure had higher values of waist circumference, body mass index and HOMA-IR (p < 0.05). Waist circumference was higher among boys in both groups (p < 0.05), and girls with altered blood pressure had higher HOMA-IR than boys (p < 0.05). There was a significant moderate correlation between body mass index and HOMA-IR in the group with altered blood pressure (ρ = 0.394; p < 0.001), and this correlation was stronger than in the normotensive group. There was also a significant moderate correlation between waist circumference and HOMA-IR in both groups (ρ = 0.345; p < 0.05). Logistic regression showed that HOMA-IR was a predictor of altered blood pressure (odds ratio [OR] = 2.0; p = 0.001). Conclusion: There was a significant association between insulin resistance and blood pressure, indicating an impact of insulin resistance on blood pressure from childhood. The correlation and association between markers of cardiovascular disease were more pronounced in adolescents with altered blood pressure, suggesting that primary prevention strategies for cardiovascular risk factors should be implemented early in childhood and adolescence.
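As a small illustration of the quantities analyzed above, the sketch below computes HOMA-IR from fasting glucose and insulin (the standard formula) and a Spearman correlation with waist circumference, using invented example values rather than the study data.

```python
# Small illustration: HOMA-IR (fasting insulin x fasting glucose / 22.5, with
# glucose in mmol/L and insulin in microU/mL) and its Spearman correlation with
# waist circumference. Values are invented examples, not the study data.
import numpy as np
from scipy.stats import spearmanr

fasting_glucose_mmol  = np.array([4.8, 5.2, 5.6, 4.9, 6.1, 5.4])       # mmol/L
fasting_insulin_uU_ml = np.array([8.0, 14.5, 19.2, 7.1, 22.3, 12.0])   # microU/mL
waist_circumference   = np.array([70.0, 84.0, 92.0, 68.0, 98.0, 81.0]) # cm

homa_ir = fasting_insulin_uU_ml * fasting_glucose_mmol / 22.5

rho, p = spearmanr(waist_circumference, homa_ir)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```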
Abstract:
Despite advances in the diagnosis and treatment of head and neck cancer, survival rates have not improved over recent years. New therapeutic strategies, including immunotherapy, are the subject of extensive research. In several types of tumors, the presence of tumor infiltrating lymphocytes (TILs), notably CD8+ T cells and dendritic cells, has been correlated with improved prognosis. Moreover, some T cells among TILs have been shown to kill tumor cells in vitro upon recognition of tumor-associated antigens. Tumor-associated antigens are expressed in a significant proportion of squamous cell carcinomas of the head and neck and apparently may play a role in the regulation of cancer cell growth, notably by inhibition of p53 protein function in some cancers. The MAGE family CT antigens could therefore potentially be used as defined targets for immunotherapy, and their study brings new insight into tumor growth regulation mechanisms. Between 1995 and 2005, 54 patients were treated surgically in our institution for squamous cell carcinoma of the oral cavity. Patient and clinical data were obtained from patient files and collected into a computerized database. For each patient, paraffin-embedded tumor specimens were retrieved, and expression of MAGE CT antigens, p53, and NY-ESO-1 was analyzed by immunohistochemistry. Results were then correlated with histopathological parameters such as tumor depth, front of invasion according to Bryne, and both local control and disease-free survival. MAGE-A was expressed in 52% of patients. NY-ESO-1 and p53 expression was found in 7% and 52% of cases, respectively. A higher tumor depth was significantly correlated with expression of MAGE-A proteins (p = 0.03). No significant correlation could be made between the expression of either p53 or NY-ESO-1 and histopathological parameters. Expression of tumor-associated antigens did not seem to impact significantly on patient prognosis. Together with the demonstrated inhibition of p53 function by CT antigens of the MAGE family, our results suggest that tumor-associated antigens may be implicated in tumor progression mechanisms. This hypothesis needs further investigation to clarify the relationship between host immune response and local tumor biology.
Abstract:
BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
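A rough sketch of the kind of model comparison described above (logistic classification vs. a classification tree on anthropometric predictors) is shown below on simulated stand-in data, not the WHO-MONICA survey data.

```python
# Rough sketch of the model comparison: logistic classification vs. a
# classification tree on anthropometric predictors. The data are simulated
# stand-ins, not the WHO-MONICA survey data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
whr = rng.normal(0.9, 0.08, n)             # waist-to-hip circumference ratio
bmi = rng.normal(26.0, 4.0, n)             # body mass index
age = rng.integers(25, 75, n).astype(float)
X = np.column_stack([whr, bmi, age])
# Simulated dyslipidemia label, loosely driven by WHR and BMI plus noise.
y = (0.4 * (whr - 0.9) / 0.08 + 0.3 * (bmi - 26.0) / 4.0
     + rng.normal(0, 1, n)) > 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=4)):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, f"correct classification rate: {acc:.2f}")
```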
Abstract:
ELISA in situ can be used to titrate hepatitis A virus (HAV) particles, and real-time polymerase chain reaction (RT-PCR) has been shown to be a fast method to quantify the HAV genome. Precise quantification of viral concentration is necessary to distinguish between infectious and non-infectious particles. The purpose of this study was to compare cell culture and RT-PCR quantification results and determine whether HAV genome quantification can be correlated with infectivity. For this purpose, three stocks of undiluted, five-fold diluted and 10-fold diluted HAV were prepared to inoculate cells in a 96-well plate. Monolayers were then incubated for seven, 10 and 14 days, and the correlation between the ELISA in situ and RT-PCR results was evaluated. At 10 days post-incubation, the highest viral load was observed in all stocks of HAV via RT-PCR (10^5 copies/mL) (p = 0.0002), while ELISA revealed the highest quantity of particles after 14 days (optical density = 0.24, p < 0.001). At seven days post-infection, there was a significant statistical correlation between the results of the two methods, indicating equivalent titres of particles and HAV genome copies during this period of infection. The results reported here indicate that the duration of growth of HAV in cell culture must be taken into account when correlating genome quantification with infectivity.
Abstract:
The evaluation of large projects raises well-known difficulties because, by definition, they modify the current price system; their public evaluation presents additional difficulties because they also modify the shadow prices that exist without the project. This paper first analyzes the basic methodologies applied until the late 1980s, based on the integration of projects into optimization models or, alternatively, on iterative procedures with information exchange between two organizational levels. New methodologies applied afterwards are based on variational inequalities, bilevel programming, and linear or nonlinear complementarity. Their foundations and different applications related to project evaluation are explored. In fact, these new tools are closely related to one another and can treat more complex cases involving, for example, the reaction of agents to policies or the existence of multiple agents in an environment characterized by common functions representing demands or constraints on polluting emissions.
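As a small illustration of the complementarity formulations mentioned above, the sketch below solves a toy linear complementarity problem by projected Gauss-Seidel iteration; the matrix and vector are arbitrary examples, not a project-evaluation model.

```python
# Toy example of a linear complementarity problem (LCP): find z >= 0 such that
# w = M z + q >= 0 and z'w = 0, solved by projected Gauss-Seidel iteration
# (valid for symmetric positive definite M). M and q are arbitrary examples.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])

z = np.zeros(2)
for _ in range(200):
    for i in range(len(z)):
        r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding the i-th term
        z[i] = max(0.0, -r / M[i, i])          # projected (nonnegative) update
w = M @ z + q
print(z, w, z @ w)                              # complementarity: z'w ~ 0
```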