964 results for Gaussian curve
Abstract:
Objective: To assess cutoff values established by ROC curves to classify 18F-NaF uptake as normal or malignant. Materials and Methods: PET/CT images were acquired 1 hour after administration of 185 MBq of 18F-NaF. Volumes of interest (VOIs) were drawn on three regions of the skeleton: the proximal right humeral diaphysis (HD), the proximal right femoral diaphysis (FD) and the first vertebral body (VB1), in a total of 254 patients, totalling 762 VOIs. Uptake in the VOIs was classified as normal or malignant on the basis of the radiopharmaceutical distribution pattern and of the CT images. A total of 675 volumes were classified as normal and 52 as malignant. Thirty-five VOIs classified as indeterminate or as nonmalignant lesions were excluded from the analysis. The standardized uptake values (SUVs) measured in the VOIs were plotted on an ROC curve for each of the three regions. The area under the ROC curve (AUC) and the best cutoff SUV for classifying the VOIs were calculated for each region, the best cutoff being defined as the value yielding the highest sum of sensitivity and specificity. Results: The AUCs were 0.933, 0.889 and 0.975 for HD, FD and VB1, respectively. The best SUV cutoffs were 9.0 (sensitivity: 73%; specificity: 99%), 8.4 (sensitivity: 79%; specificity: 94%) and 21.0 (sensitivity: 93%; specificity: 95%) for HD, FD and VB1, respectively. Conclusion: The best cutoff value varies with the bone region analyzed, and it is not possible to establish a single value for the whole body.
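A minimal sketch of the cutoff procedure this abstract describes: maximizing the sum of sensitivity and specificity over the ROC curve (equivalently, the Youden index). The SUV values and labels below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical SUVs and labels (1 = malignant, 0 = normal) for one region.
suv = np.array([4.2, 6.1, 8.9, 22.0, 5.0, 30.2, 7.7, 25.1, 3.9, 19.8])
malignant = np.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 1])

auc = roc_auc_score(malignant, suv)
fpr, tpr, thresholds = roc_curve(malignant, suv)

# Best cutoff = threshold maximizing sensitivity + specificity
#             = tpr + (1 - fpr), i.e., Youden's J plus 1.
best = np.argmax(tpr + (1 - fpr))
print(f"AUC = {auc:.3f}, best SUV cutoff = {thresholds[best]:.1f}")
print(f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```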
Abstract:
The objective of this paper is to analyse the existence (or not) of a wage curve in Colombia, paying special attention to the differences between formal and informal workers, an issue that has been systematically ignored in the wage curve literature. Results obtained using microdata from the Colombian Continuous Household Survey (CHS) between 2002 and 2006 show the existence of a wage curve with a negative slope for the Colombian economy. Using information on metropolitan areas, the estimated elasticity of individual wages with respect to local unemployment rates was -0.07, a value very close to those obtained for other countries. However, disaggregating the statistical information for formal and informal workers revealed significant differences between the two groups. In particular, for the least protected group in the labour market, informal workers (both men and women), a steeply negatively sloped wage curve was found. This result is consistent with the conclusions of efficiency-wage theoretical models and should be taken into account when analysing the functioning of regional labour markets in developing countries.
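A sketch of the canonical wage curve regression behind an elasticity like the -0.07 reported: log individual wages regressed on the log of the local unemployment rate. The simulated data are placeholders; real specifications add individual controls and region fixed effects.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
log_unemp = rng.normal(np.log(0.12), 0.3, n)               # log local unemployment rate
log_wage = 7.0 - 0.07 * log_unemp + rng.normal(0, 0.5, n)  # true elasticity -0.07

X = sm.add_constant(log_unemp)
fit = sm.OLS(log_wage, X).fit()
print(f"estimated wage-curve elasticity: {fit.params[1]:.3f}")
```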
Abstract:
We present new analytical tools able to predict the averaged behavior of fronts spreading through self-similar spatial systems, starting from reaction-diffusion equations. The averaged speed of these fronts is predicted and compared with the predictions from a more general equation (proposed in a previous work of ours) and with simulations. We focus here on two fractals, the Sierpinski gasket (SG) and the Koch curve (KC), for two reasons: (i) they are widely known structures, and (ii) they are deterministic fractals, so their analytical study turns out to be more intuitive. These structures, despite their simplicity, let us observe several characteristics of fractal fronts. Finally, we discuss the usefulness and limitations of our approach.
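For context (not stated in the abstract): the baseline against which front speeds on fractals are usually compared is the classical Fisher-KPP pulled-front speed on a uniform medium, for a reaction-diffusion equation with diffusivity D and linear growth rate r:

```latex
\frac{\partial u}{\partial t} = D\,\nabla^2 u + r\,u(1-u),
\qquad v_{\mathrm{front}} = 2\sqrt{rD}.
```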
Abstract:
This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians thereafter. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has been studied recently by, e.g., L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable introduced by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions, which for the parameter value p = 2 coincide with the classical functions. Gaussian hypergeometric functions play an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and those bounds are given in a simple form. This theory has a long history going back two centuries, including names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory, and the invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
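For readers unfamiliar with these p-analogues, the definition below follows Lindqvist's paper cited above (it is background, not a result of the thesis): the generalized inverse sine is defined by an integral, and sin_p is its inverse function on the appropriate interval,

```latex
\arcsin_p(x) = \int_0^x \frac{dt}{(1-t^p)^{1/p}}, \qquad 0 \le x \le 1,
```

so that for p = 2 one recovers the classical arcsin and sin.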
Abstract:
Understanding the hydrosedimentological behavior of a watershed is essential for properly managing and using its water resources. The objective of this study was to verify the feasibility of an alternative procedure for the indirect determination of the sediment rating (key) curve using a turbidimeter. The research was carried out on the São Francisco Falso River, situated in the west of the state of Paraná on the left bank of the ITAIPU reservoir. The direct method was applied using a DH-48 suspended-sediment sampler. The indirect method consisted of the use of a limnigraph and a turbidimeter. Based on the results obtained, it was concluded that the indirect method using a turbidimeter is fully feasible, since it yielded a power-function mathematical model equivalent to that of the direct method. Furthermore, the average suspended-sediment discharge into the São Francisco Falso River during the 2006/2007 harvest was calculated at 7.26 metric tons per day.
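A sketch of how a power-function sediment rating curve of the kind mentioned, Qs = a·Q^b, is typically fitted: linear regression in log-log space. The discharge values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical paired observations: water discharge Q (m^3/s) and
# suspended-sediment discharge Qs (t/day).
Q = np.array([2.1, 3.4, 5.0, 7.2, 10.5, 14.8])
Qs = np.array([0.8, 1.9, 3.7, 6.9, 13.2, 24.0])

# Fit Qs = a * Q**b  <=>  log Qs = log a + b * log Q
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)
print(f"rating curve: Qs = {a:.3f} * Q^{b:.2f}")
```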
Abstract:
Greenhouse studies were conducted in 2008-2009 with the objective of adjusting dose-response curves for the main soil-applied herbicides currently used in cotton to control Amaranthus viridis, A. hybridus, A. spinosus and A. lividus, as well as comparing susceptibility among the species using model identity tests. Thirty-six individual experiments were carried out simultaneously in a greenhouse, in a sandy clay loam soil (21% clay, 2.36% OM), combining increasing doses of the herbicides alachlor, clomazone, diuron, oxyfluorfen, pendimethalin, prometryn, S-metolachlor and trifluralin applied to each species. Dose-response curves were adjusted for visual weed control at 28 days after herbicide application, and the doses required for 80% (C80) and 95% (C95) control were calculated. All herbicides except clomazone and trifluralin provided efficient control of most Amaranthus species, but substantial differences in susceptibility to the herbicides were found. In general, A. lividus was the least sensitive species, whereas A. spinosus demonstrated the highest sensitivity. Alachlor, diuron, oxyfluorfen, pendimethalin, S-metolachlor and prometryn are efficient alternatives to control Amaranthus spp. at doses lower than those currently recommended for cotton.
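A sketch of a dose-response fit of the general kind described, using the common four-parameter log-logistic model (the abstract does not name the model; this choice and all numbers are hypothetical) and inverting it for the doses giving 80% and 95% control.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(x, lower, upper, ed50, slope):
    """Four-parameter log-logistic dose-response model."""
    return lower + (upper - lower) / (1.0 + (x / ed50) ** (-slope))

dose = np.array([12.5, 25, 50, 100, 200, 400, 800])   # hypothetical doses
control = np.array([5, 14, 35, 62, 85, 95, 99])       # % visual control

popt, _ = curve_fit(log_logistic, dose, control,
                    p0=[0, 100, 100, 1], maxfev=10000)
lower, upper, ed50, slope = popt

def dose_for(level):
    # Invert the fitted model: find x with log_logistic(x) == level.
    frac = (level - lower) / (upper - lower)
    return ed50 * (frac / (1 - frac)) ** (1 / slope)

print(f"C80 = {dose_for(80):.0f}, C95 = {dose_for(95):.0f}")
```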
Abstract:
We analyzed the flow-volume curves of 50 patients with complaints of snoring and daytime sleepiness under treatment at the Pneumology Unit of the University Hospital of Brasília. The total group was divided into snorers without obstructive sleep apnea (OSA) (N = 19) and snorers with OSA (N = 31); the patients with OSA were subdivided into two groups according to the apnea/hypopnea index (AHI): AHI < 20/h (N = 14) and AHI > 20/h (N = 17). The control group (N = 10) consisted of nonsmoking subjects without complaints of snoring, daytime sleepiness or pulmonary disease. The population studied (controls and patients) consisted of males of similar age, height and body mass index (BMI); spirometric data were also similar in the four groups. There was no significant difference in the ratio of forced expiratory to inspiratory flows (FEF50%/FIF50%) in any group: controls, 0.89; snorers, 1.11; snorers with OSA (AHI < 20/h), 1.42; and snorers with OSA (AHI > 20/h), 1.64. The FIF at 50% of vital capacity (FIF50%) of snoring patients with or without OSA was lower than that of the control group (P < 0.05): snorers, 4.30 l/s; snorers with OSA (AHI < 20/h), 3.69 l/s; snorers with OSA (AHI > 20/h), 3.17 l/s; control group, 5.48 l/s. The FIF50% of patients with severe OSA (AHI > 20/h) was also lower than that of snorers without OSA (P < 0.05): 3.17 l/s and 4.30 l/s, respectively. We conclude that 1) the FEF50%/FIF50% ratio is not useful for predicting OSA, and 2) FIF50% is decreased in snoring patients with and without OSA, suggesting that these patients have increased upper airway resistance (UAR).
Abstract:
The reverse transcription-polymerase chain reaction (RT-PCR) is the most sensitive method used to evaluate gene expression. Although many advances have been made since quantitative RT-PCR was first described, few reports deal with the mathematical bases of this technique. The aim of the present study was to develop and standardize a competitive PCR method using standard curves to quantify transcripts of the myogenic regulatory factors MyoD, Myf-5, Myogenin and MRF4 in chicken embryos. Competitor cDNA molecules were constructed for each gene under study using deletion primers, designed to maintain the anchorage sites for the primers used to amplify the target cDNAs. Standard curves were prepared by co-amplification of different amounts of target cDNA with a constant amount of competitor. The content of specific mRNAs in embryo cDNAs was determined after PCR with a known amount of competitor and comparison to the standard curves. Transcripts of the housekeeping β-actin gene were measured to normalize the results. As predicted by the model, most of the standard curves showed a slope close to 1, while the intercepts varied depending on the relative efficiency of competitor amplification. The sensitivity of the RT-PCR method permitted the detection of as few as 60 MyoD/Myf-5 molecules per reaction, but approximately 600 molecules of MRF4/Myogenin mRNAs were necessary to produce a measurable signal. A coefficient of variation of 6 to 19% was estimated for the different genes analyzed (6 to 9 repetitions). The competitive RT-PCR assay described here is sensitive, precise, and allows quantification of up to 9 transcripts from a single cDNA sample.
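A sketch of the standard-curve arithmetic underlying such competitive assays: with target and competitor co-amplified at near-equal efficiency, log(target/competitor signal) is linear in log(target input) with slope near 1, and an unknown sample is read off by inversion. All numbers below are hypothetical placeholders.

```python
import numpy as np

# Standard curve: known target inputs co-amplified with a fixed amount
# of competitor; the response is the target/competitor signal ratio.
target_input = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # molecules/reaction
ratio = np.array([0.09, 1.1, 9.8, 102.0, 980.0])     # target/competitor signal

slope, intercept = np.polyfit(np.log10(target_input), np.log10(ratio), 1)
print(f"slope = {slope:.2f} (the model predicts ~1)")

# Unknown sample measured against the same competitor amount:
unknown_ratio = 3.5
log_molecules = (np.log10(unknown_ratio) - intercept) / slope
print(f"estimated transcripts: {10**log_molecules:.2e} molecules/reaction")
```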
Abstract:
During cardiopulmonary exercise testing (CPET), stroke volume can be indirectly assessed by the O2 pulse profile. However, for a valid interpretation, the stability of this variable over time should be known. The objective was to analyze the stability of the O2 pulse curve relative to body mass in elite athletes. VO2, heart rate (HR) and relative O2 pulse were compared at every 10% of the running time in two maximal CPETs, performed between 2005 and 2010, of 49 soccer players. Maximal values of VO2 (63.4 ± 0.9 vs 63.5 ± 0.9 mL O2·kg-1·min-1), HR (190 ± 1 vs 188 ± 1 bpm) and relative O2 pulse (32.9 ± 0.6 vs 32.6 ± 0.6 mL O2·beat-1·kg-1) were similar in the two CPETs (P > 0.05), while the final treadmill velocity increased from 18.5 ± 0.9 to 18.9 ± 1.0 km/h (P < 0.01). Relative O2 pulse increased linearly and similarly in both evaluations (r² = 0.64 and 0.63) up to 90% of the running time. Between 90 and 100% of the running time, the values were less stable, with up to 50% of the players showing a tendency toward a plateau in the relative O2 pulse. In young healthy men in good to excellent aerobic condition, the morphology of the relative O2 pulse curve is consistent up to close to peak effort for a CPET repeated within a 1-year period. The absence of an increase in relative O2 pulse at peak effort could represent a physiologic stroke volume limitation in these athletes.
Abstract:
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret a dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation, characterized by low or high linear energy transfer (LET), and the dose rate. This study was designed to obtain dose calibration curves by scoring dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two programs are discussed; the results obtained were compared with each other and with other published low-LET radiation curves. Both programs yielded identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates.
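A sketch of the dose estimation step such calibration curves support: the dicentric yield follows the linear-quadratic model Y = c + αD + βD² (consistent with the linear and quadratic terms the abstract mentions), fitted on irradiated samples and then inverted for an unknown absorbed dose. The coefficients below are hypothetical placeholders, not the values produced by CABAS or Dose Estimate.

```python
import numpy as np

# Hypothetical linear-quadratic calibration: Y = c + alpha*D + beta*D^2,
# with Y = dicentrics per cell and D = absorbed dose in Gy.
c, alpha, beta = 0.001, 0.03, 0.06

def estimate_dose(dicentrics, cells):
    """Invert the calibration curve for an observed dicentric yield."""
    y = dicentrics / cells
    # Positive root of beta*D^2 + alpha*D + (c - y) = 0.
    disc = alpha**2 - 4 * beta * (c - y)
    return (-alpha + np.sqrt(disc)) / (2 * beta)

print(f"estimated dose: {estimate_dose(dicentrics=120, cells=500):.2f} Gy")
```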
Abstract:
For the past 20 years, researchers have applied the Kalman filter to the modeling and forecasting of the term structure of interest rates. Despite its impressive performance in in-sample fitting of yield curves, little research has focused on out-of-sample forecasting of yield curves using the Kalman filter. The goal of this thesis is to develop a unified dynamic model based on Diebold and Li's (2006) and Nelson and Siegel's (1987) three-factor model, and to estimate this dynamic model using the Kalman filter. We compare both the in-sample and the out-of-sample performance of our dynamic method with various other models in the literature. We find that our dynamic model dominates existing models in medium- and long-horizon yield curve predictions. However, the dynamic model should be used with caution when forecasting short-maturity yields.
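A sketch of the Nelson-Siegel (1987) three-factor form that Diebold and Li (2006) turn into a dynamic model, where the factors (level, slope, curvature) become the state vector filtered by the Kalman filter. The λ and factor values below are hypothetical.

```python
import numpy as np

def nelson_siegel(tau, beta1, beta2, beta3, lam):
    """Nelson-Siegel yield at maturity tau (in years)."""
    x = lam * tau
    slope_load = (1 - np.exp(-x)) / x
    return (beta1                                  # level
            + beta2 * slope_load                   # slope
            + beta3 * (slope_load - np.exp(-x)))   # curvature

maturities = np.array([0.25, 1, 2, 5, 10, 30])
# Hypothetical factor values; in the dynamic model these follow a VAR(1)
# and are estimated from observed yields with the Kalman filter.
print(nelson_siegel(maturities, beta1=0.05, beta2=-0.02, beta3=0.01, lam=0.6))
```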
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more evidence on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods; (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption; and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
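For reference, a sketch of the Gibbons-Ross-Shanken (1989) statistic to which the Gaussian-case tests correspond: with N test assets over T periods, OLS alphas and residual covariance from the market-model regressions, and the market's sample mean and variance. The numerical inputs below are hypothetical.

```python
import numpy as np

def grs_statistic(alphas, resid_cov, mkt_mean, mkt_var, T):
    """Gibbons-Ross-Shanken mean-variance efficiency test statistic.

    F = (T - N - 1)/N * (alpha' Sigma^-1 alpha) / (1 + mu_m^2 / sigma_m^2),
    distributed F(N, T - N - 1) under normality.
    """
    N = len(alphas)
    quad = alphas @ np.linalg.solve(resid_cov, alphas)
    return (T - N - 1) / N * quad / (1 + mkt_mean**2 / mkt_var)

# Hypothetical inputs for N = 3 assets over T = 60 months.
alphas = np.array([0.002, -0.001, 0.003])
resid_cov = np.diag([0.0016, 0.0025, 0.0009])
print(f"GRS F = {grs_statistic(alphas, resid_cov, 0.006, 0.0020, T=60):.2f}")
```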
Abstract:
In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on the Gali and Gertler (1999) specification, on both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find that there is some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need for employing identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
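The hybrid specification at issue, in the Gali-Gertler form, relates inflation to real marginal cost together with forward- and backward-looking inflation terms:

```latex
\pi_t = \lambda\, mc_t + \gamma_f\, E_t[\pi_{t+1}] + \gamma_b\, \pi_{t-1} + \varepsilon_t ,
```

where the survey-data variant replaces the rational expectation E_t[π_{t+1}] with measured inflation expectations.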
Abstract:
The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations where the instrumental variables are weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood ratio and Lagrange multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instrumental variables are weakly correlated with the variable to be instrumented, have shown that the use of these statistics often leads to unreliable results. One remedy for this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can one propose test procedures that select instruments that are both strong and valid? Is it possible to propose instrument selection procedures that remain valid even in the presence of weak identification? This thesis focuses on structural models (simultaneous equations models) and answers these questions through four essays. The first essay is published in the Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin (AR, 1949) statistic and the Kleibergen (K, 2003) statistic, with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are generally consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail to hold, but where the asymptotic distribution is modified in a way that could lead to level distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid.
Second, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size increases), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as in the case of valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests as well as on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (level) and under the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis yields several insights as well as extensions of earlier procedures. Indeed, the characterization of the finite-sample distribution of these statistics allows the construction of exact Monte Carlo tests for exogeneity even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (the level is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and the endogeneity moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well compared to two-stage least squares. This suggests that the instrumental variables method should be applied only when one is confident of having strong instruments. Hence, Guggenberger's (2008) conclusions are qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (the Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a common problem in empirical work, namely testing the partial exogeneity of a subset of variables.
We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and the endogeneity moderate. We also show that this test can serve as an instrument selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the initial or extended Wald test makes it possible to construct confidence regions and to test linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for constructing confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. These results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control the level and have power even when the instruments are weak. This allows us to propose a valid instrument selection procedure even in the presence of an identification problem. The instrument selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) just like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and the endogeneity moderate; (2) the pre-test estimators have overall excellent performance compared to the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly so in the second [Bound (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
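A sketch of the Anderson-Rubin statistic discussed in the first essay, for the simplified structural equation y = Yβ + u with instrument matrix Z (no included exogenous regressors): AR(β₀) measures how much of y − Yβ₀ the instruments explain, and its null distribution does not depend on instrument strength, which is what makes it identification-robust. All data below are simulated placeholders.

```python
import numpy as np

def anderson_rubin(y, Y, Z, beta0):
    """Anderson-Rubin (1949) statistic for H0: beta = beta0 in y = Y*beta + u.

    AR = [(u0' P_Z u0)/k] / [(u0' M_Z u0)/(n - k)],  u0 = y - Y*beta0;
    F(k, n - k) under H0 with valid instruments and Gaussian errors,
    whatever the instrument strength.
    """
    n, k = Z.shape
    u0 = y - Y * beta0
    fitted = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u0)  # P_Z u0
    ssr_fit = fitted @ fitted
    ssr_res = u0 @ u0 - ssr_fit
    return (ssr_fit / k) / (ssr_res / (n - k))

rng = np.random.default_rng(1)
n, k = 500, 3
Z = rng.normal(size=(n, k))                 # exogenous instruments
v = rng.normal(size=n)
Y = Z @ np.array([0.5, 0.3, 0.2]) + v       # first stage (strong here)
u = 0.8 * v + rng.normal(size=n)            # structural error; Y is endogenous
y = 1.0 * Y + u                             # true beta = 1

print(f"AR at true beta: {anderson_rubin(y, Y, Z, 1.0):.2f}")   # ~1 on average
print(f"AR at beta = 0:  {anderson_rubin(y, Y, Z, 0.0):.2f}")   # large
```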