Abstract:
The species Sitobion graminis Takahashi, 1950 (Hemiptera, Aphididae) was first detected in Brazil in 1998, in Curitiba, Paraná state, associated with the grass species Erianthus sp., Calamagrostis sp. and Paspalum urvillei. Both the field-collected and laboratory-reared specimens presented a noticeable intrapopulational variation in body and appendage length and in dorso-abdominal sclerotization. This species has been recorded in Malaysia, New Guinea, India, the Philippines and Africa, where it colonizes several species of Poaceae. S. graminis differs from the other Sitobion species associated with grasses in Brazil in presenting a black cauda and siphunculi and a constriction at the base of the last rostral segment. Biological data were obtained in the laboratory by rearing newborn nymphs on the inflorescences of the host plants. They passed through four nymphal instars. The mean duration of the nymphal stage was 11.4 days, with a mortality rate of 36.5%. The mean pre-larviposition period was 1.8 days; mean longevity of the females was 25.2 days; and mean fecundity was 18.7 nymphs/female, ranging from 2 to 41 nymphs/female.
Abstract:
In this paper we propose a simple and general model for computing the Ramsey optimal inflation tax, which includes several models from the previous literature as special cases. We show that it cannot be claimed that the Friedman rule is always optimal (or always non-optimal) on theoretical grounds. Whether the Friedman rule is optimal depends on conditions related to the shape of various relevant functions. One contribution of this paper is to relate these conditions to {\it measurable} variables such as the interest rate or the consumption elasticity of money demand. We find that it tends to be optimal to tax money when there are economies of scale in the demand for money (the scale elasticity is smaller than one) and/or when money is required for the payment of consumption or wage taxes. We find that it tends to be optimal to tax money more heavily when the interest elasticity of money demand is small. We present empirical evidence on the parameters that determine the optimal inflation tax. Calibrating the model to a variety of empirical studies yields an optimal nominal interest rate of less than 1\%/year, although that finding is sensitive to the calibration.
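The role of the scale and interest elasticities can be illustrated with the textbook Baumol-Tobin money demand, a special case (not the paper's general model) in which both elasticities equal 0.5 in absolute value; the function names and parameter values below are arbitrary illustrations:

```python
import numpy as np

def baumol_tobin_money_demand(Y, i, b=1.0):
    # M = sqrt(b * Y / (2 * i)): square-root money demand, so the
    # scale (consumption) elasticity is 0.5 < 1 (economies of scale)
    # and the interest elasticity is -0.5
    return np.sqrt(b * Y / (2.0 * i))

def log_elasticity(f, x, h=1e-6):
    # numerical elasticity d log f / d log x
    return (np.log(f(x * (1.0 + h))) - np.log(f(x))) / np.log(1.0 + h)

scale_elasticity = log_elasticity(lambda Y: baumol_tobin_money_demand(Y, i=0.05), 100.0)
interest_elasticity = log_elasticity(lambda i: baumol_tobin_money_demand(100.0, i), 0.05)
print(scale_elasticity, interest_elasticity)   # ≈ 0.5 and ≈ -0.5
```

Under the abstract's criterion, a scale elasticity below one is the kind of measurable condition under which taxing money tends to be optimal.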
Abstract:
In order to characterize inverse agonism at alpha1B-adrenoceptors, we have compared the concentration-response relationships of several quinazoline and non-quinazoline alpha1-adrenoceptor antagonists at cloned hamster wild-type (WT) alpha1B-adrenoceptors and a constitutively active mutant (CAM) thereof upon stable expression in Rat-1 fibroblasts. Receptor activation or inhibition thereof was assessed as [3H]inositol phosphate (IP) accumulation. Quinazoline (alfuzosin, doxazosin, prazosin, terazosin) and non-quinazoline alpha1-adrenoceptor antagonists (BE 2254, SB 216,469, tamsulosin) concentration-dependently inhibited phenylephrine-stimulated IP formation at both WT and CAM with Ki values similar to those previously found in radioligand binding studies. At CAM in the absence of phenylephrine, the quinazolines produced concentration-dependent inhibition of basal IP formation; the maximum inhibition was approximately 55%, and the corresponding EC50 values were slightly smaller than the Ki values. In contrast, BE 2254 produced much less inhibition of basal IP formation, SB 216,469 was close to being a neutral antagonist, and tamsulosin even weakly stimulated IP formation. The inhibitory effects of the quinazolines and BE 2254 as well as the stimulatory effect of tamsulosin were equally blocked by SB 216,469 at CAM. At WT in the absence of phenylephrine, tamsulosin did not cause significant stimulation and none of the other compounds caused significant inhibition of basal IP formation. We conclude that alpha1-adrenoceptor antagonists with a quinazoline structure exhibit greater efficacy as inverse agonists than those without.
Value of sTREM-1, procalcitonin and CRP as laboratory parameters for postmortem diagnosis of sepsis.
Abstract:
OBJECTIVES: Triggering receptor expressed on myeloid cells-1 (TREM-1) was reported to be up-regulated in various inflammatory diseases as well as in bacterial sepsis. Increased cell-surface TREM-1 expression was also shown to result in marked plasma elevation of the soluble form of this molecule (sTREM-1) in patients with bacterial infections. In this study, we investigated sTREM-1, procalcitonin and C-reactive protein in postmortem serum in a series of sepsis-related fatalities and control individuals who underwent medico-legal investigations. sTREM-1 was also measured in pericardial fluid and urine. METHODS: Two study groups were prospectively formed, a sepsis-related fatalities group and a control group. The sepsis-related fatalities group consisted of sixteen forensic autopsy cases. Eight of these had a documented clinical diagnosis of sepsis in vivo. The control group consisted of sixteen forensic autopsy cases with various causes of death. RESULTS: Postmortem serum sTREM-1 concentrations were higher in the sepsis group with a mean value of 173.6 pg/ml in septic cases and 79.2 pg/ml in control individuals. The cutoff value of 90 pg/ml provided the best sensitivity and specificity. Pericardial fluid sTREM-1 values were higher in the septic group, with a mean value of 296.7 pg/ml in septic cases and 100.9 pg/ml in control individuals. The cutoff value of 135 pg/ml provided the best sensitivity and specificity. Mean urine sTREM-1 concentration was 102.9 pg/ml in septic cases and 89.3 pg/ml in control individuals. CONCLUSIONS: Postmortem serum sTREM-1, individually considered, did not provide better sensitivity and specificity than procalcitonin in detecting sepsis. However, simultaneous assessment of procalcitonin and sTREM-1 in postmortem serum can be of help in clarifying contradictory postmortem findings. 
sTREM-1 determination in pericardial fluid can be an alternative to postmortem serum in those situations in which biochemical analyses are required and blood collected during autopsy proves insufficient.
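How a cutoff such as 90 pg/ml translates into sensitivity and specificity can be sketched as follows; the marker values below are invented for illustration and are not the study's data:

```python
def sens_spec(septic_values, control_values, cutoff):
    """Sensitivity and specificity of a 'marker >= cutoff' decision rule."""
    tp = sum(v >= cutoff for v in septic_values)   # septic cases flagged
    tn = sum(v < cutoff for v in control_values)   # controls correctly cleared
    return tp / len(septic_values), tn / len(control_values)

# hypothetical postmortem serum sTREM-1 values in pg/ml -- NOT the study data
septic = [95, 140, 210, 88, 300, 120]
control = [60, 85, 92, 40, 75, 110]
print(sens_spec(septic, control, cutoff=90))   # ≈ (0.83, 0.67)
```

Scanning candidate cutoffs and keeping the one with the best joint sensitivity and specificity is the generic procedure behind "the cutoff value of 90 pg/ml provided the best sensitivity and specificity".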
Abstract:
This paper is an attempt to clarify the relationship between fractionalization, polarization and conflict. The literature on the measurement of ethnic diversity has taken as given that the proper measure for heterogeneity can be calculated by using the fractionalization index. This index is widely used in industrial economics and, for empirical purposes, the ethnolinguistic fragmentation index is readily available for regression exercises. Nevertheless, the adequacy of a synthetic index of heterogeneity depends on the intrinsic characteristics of the heterogeneous dimension to be measured. In the case of ethnic diversity there is a very strong conflictive dimension. For this reason we argue that the measure of heterogeneity should be one of the class of polarization measures. In fact, the intuition of the relationship between conflict and fractionalization does not hold for more than two groups. In contrast with the usual problem of polarization indices, which are difficult to implement empirically without making some arbitrary choice of parameters, we show that the RQ index, proposed by Reynal-Querol (2002), is the only discrete polarization measure that satisfies the basic properties of polarization. Additionally, we present a derivation of the RQ index from a simple rent-seeking model. In the empirical section we show that while ethnic polarization has a positive effect on civil wars and, indirectly, on growth, this effect is not present when we use ethnic fractionalization.
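The divergence between fractionalization and polarization beyond two groups can be seen in a small sketch; the RQ index is commonly written as 4 * sum p_i^2 * (1 - p_i), and the group shares below are made up:

```python
def fractionalization(shares):
    """Probability that two randomly drawn individuals belong to
    different groups: 1 - sum p_i^2."""
    return 1.0 - sum(p * p for p in shares)

def rq_polarization(shares):
    """Discrete polarization index, commonly written RQ = 4 * sum p_i^2 * (1 - p_i);
    maximal (= 1) for two groups of equal size."""
    return 4.0 * sum(p * p * (1.0 - p) for p in shares)

# two equal groups: the most conflictive configuration for both indices
print(fractionalization([0.5, 0.5]), rq_polarization([0.5, 0.5]))   # 0.5 1.0

# ten equal groups: fractionalization keeps rising while polarization falls,
# which is why the conflict intuition breaks down beyond two groups
print(fractionalization([0.1] * 10), rq_polarization([0.1] * 10))   # ≈ 0.9  ≈ 0.36
```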
Abstract:
The purpose of this study was to investigate the impact of navigator timing on image quality in navigator-gated and real-time motion-corrected, free-breathing, three-dimensional (3D) coronary MR angiography (MRA) with submillimeter spatial image resolution. Both phantom and in vivo investigations were performed. 3D coronary MRA with real-time navigator technology was applied using variable navigator time delays (time delay between the navigator and imaging sequences) and varying spatial resolutions. Quantitative objective and subjective image quality parameters were assessed. For high-resolution imaging, reduced image quality was found as a function of increasing navigator time delay. Lower spatial resolution coronary MRA showed only minor sensitivity to navigator timing. These findings were consistent among volunteers and phantom experiments. In conclusion, for submillimeter navigator-gated and real-time motion-corrected 3D coronary MRA, shortening the time delay between the navigator and the imaging portion of the sequence becomes increasingly important for improved spatial resolution.
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of Mean Squared Errors (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
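The mixed fixed/proportional allocation described above can be sketched as follows; the function name, the 50/50 split and the area sizes are illustrative (the paper determines the optimal fractions):

```python
def mixed_allocation(n, area_sizes, prop_fraction=0.5):
    """Mixed sample design: a fraction of the total sample n is allocated
    proportionally to area size, the rest evenly across areas.
    Returns unrounded allocations; prop_fraction = 0.5 is illustrative."""
    total = sum(area_sizes)
    k = len(area_sizes)
    prop_part = n * prop_fraction          # proportional share
    even_part = n * (1.0 - prop_fraction)  # fixed (equal) share
    return [prop_part * s / total + even_part / k for s in area_sizes]

# four small areas of very different size: every area keeps a floor of
# even_part / k observations, so the smallest area is not starved
alloc = mixed_allocation(1000, [5000, 3000, 1500, 500])
print(alloc)   # [375.0, 275.0, 200.0, 150.0]
```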
Abstract:
In this paper we explore the effects of the minimum pension program on welfare and retirement in Spain. This is done with a stylized life-cycle model which provides a convenient analytical characterization of optimal behavior. We use data from the Spanish Social Security to estimate the behavioral parameters of the model and then simulate the changes induced by the minimum pension in aggregate retirement patterns. The impact is substantial: there is a threefold increase in retirement at 60 (the age of first entitlement) with respect to the economy without minimum pensions, and total early retirement (before or at 60) is almost 50% larger.
Abstract:
We represent interval ordered homothetic preferences with a quantitative homothetic utility function and a multiplicative bias. When preferences are weakly ordered (i.e., when indifference is transitive), such a bias equals 1. When indifference is intransitive, the biasing factor is a positive function smaller than 1 and measures a threshold of indifference. We show that the bias is constant if and only if preferences are semiordered, and we identify conditions ensuring a linear utility function. We illustrate our approach with indifference sets on a two-dimensional commodity space.
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables while preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
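The weight-estimation step can be sketched with generic least-squares fitting; this is an illustration under simplifying assumptions (weighted Euclidean distance, Nelder-Mead search on synthetic data), not the paper's actual algorithm:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def fit_wmds_weights(X, target_d):
    """Estimate positive per-variable weights so that weighted Euclidean
    distances between the rows of X best fit target_d (condensed form,
    as returned by pdist). Least-squares fit via Nelder-Mead."""
    p = X.shape[1]
    def stress(logw):
        w = np.exp(logw)   # log-parametrization keeps the weights positive
        return np.sum((pdist(X * np.sqrt(w)) - target_d) ** 2)
    res = minimize(stress, np.zeros(p), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 10000})
    return np.exp(res.x)

# sanity check: recover known weights from exactly fitting distances
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([2.0, 0.5, 1.0])
w_hat = fit_wmds_weights(X, pdist(X * np.sqrt(w_true)))
print(w_hat)
```

With the weights fixed, the remaining steps follow the classical path: decompose the weighted matrix and display rows and columns in a biplot.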
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
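The power transformations mentioned as one of the linking parametrizations can be illustrated with the Box-Cox-style family (x^alpha - 1) / alpha, which tends to log x as alpha approaches 0; the sample values are arbitrary:

```python
import numpy as np

def power_transform(x, alpha):
    """Box-Cox-style power transform (x**alpha - 1) / alpha,
    which tends to log(x) as alpha -> 0."""
    x = np.asarray(x, dtype=float)
    if alpha == 0:
        return np.log(x)
    return (x ** alpha - 1.0) / alpha

# arbitrary positive values: as alpha shrinks, the transformed data glide
# from the linear (alpha = 1) towards the logarithmic representation --
# the kind of parameter a "movie" can vary frame by frame
x = np.array([0.2, 1.0, 5.0])
for a in (1.0, 0.5, 0.1, 0.0):
    print(a, power_transform(x, a))
```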
Abstract:
A family of scaling corrections aimed to improve the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not on the overall fit of a model, but on a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
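A sketch of such a computation, assuming the widely used form in which each model's scaling correction is recovered as the ratio of its raw to its scaled statistic; variable names and the numerical example are illustrative:

```python
def scaled_difference_chisq(T0, T1, Tbar0, Tbar1, d0, d1):
    """Scaled chi-square difference test for nested models M0 (d0 df,
    more restricted) and M1 (d1 df), from the raw statistics T and the
    SB-scaled statistics Tbar reported by standard software."""
    c0 = T0 / Tbar0                       # scaling correction of M0
    c1 = T1 / Tbar1                       # scaling correction of M1
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)  # correction for the difference
    return (T0 - T1) / cd, d0 - d1        # scaled statistic and its df

# illustrative numbers, not from the paper
Td, df = scaled_difference_chisq(T0=120.0, T1=55.0, Tbar0=100.0, Tbar1=50.0,
                                 d0=40, d1=30)
print(Td, df)   # ≈ 43.33 with 10 df
```

Note that simply differencing the two scaled statistics (100 - 50 = 50 here) gives a different, and per Satorra (1999) incorrect, value.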
Abstract:
A class of composite estimators of small area quantities that exploit spatial (distance-related) similarity is derived. It is based on a distribution-free model for the areas, but the estimators are aimed to have optimal design-based properties. Composition is also applied to estimate some of the global parameters on which the small area estimators depend. It is shown that the commonly adopted assumption of random effects is not necessary for exploiting the similarity of the districts (borrowing strength across the districts). The methods are applied in the estimation of the mean household sizes and the proportions of single-member households in the counties (comarcas) of Catalonia. The simplest version of the estimators is more efficient than the established alternatives, even though the extent of spatial similarity is quite modest.
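A composite estimator of this general type is a convex combination of a direct and a synthetic component; the sketch below is generic, with made-up household-size numbers, and the paper's actual weighting is design-based rather than chosen by hand:

```python
def composite_estimate(direct, synthetic, gamma):
    """Composite small area estimator: a convex combination of the area's
    direct (design-based) estimate and a synthetic estimate that borrows
    strength from similar areas; gamma typically grows with sample size."""
    return gamma * direct + (1.0 - gamma) * synthetic

# a thinly sampled comarca: lean mostly on the synthetic component
print(composite_estimate(direct=3.4, synthetic=2.9, gamma=0.3))  # ≈ 3.05
# a well-sampled comarca: trust the direct estimate
print(composite_estimate(direct=3.4, synthetic=2.9, gamma=0.9))  # ≈ 3.35
```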
Abstract:
To provide quantitative support to handwriting evidence evaluation, a new method was developed through the computation of a likelihood ratio based on a Bayesian approach. In the present paper, the methodology is briefly described and applied to data collected within a simulated case of a threatening letter. Fourier descriptors are used to characterise the shape of loops of handwritten characters "a", which are compared: 1) with reference characters "a" of the true writer of the threatening letter, and 2) with characters "a" of a writer who did not write the threatening letter. The findings indicate that the probabilistic methodology correctly supports either the hypothesis of authorship or the alternative hypothesis. Further developments will enable the handwriting examiner to use this methodology as a helpful assistance to assess the strength of evidence in handwriting casework.
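Fourier descriptors of a closed loop can be sketched as follows; this minimal version (FFT magnitudes made position- and scale-invariant) is illustrative and not necessarily the descriptor set used in the paper:

```python
import numpy as np

def fourier_descriptors(contour_xy, n_harmonics=8):
    """Elementary Fourier descriptors of a closed contour: FFT of the
    points read as complex numbers, with the DC term dropped (position
    invariance) and magnitudes divided by the first harmonic (scale
    invariance); taking absolute values discards the starting phase."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    F = np.fft.fft(z)[1:n_harmonics + 1]
    return np.abs(F / F[0])

# sanity check on a circle: all energy sits in the first harmonic
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
print(fourier_descriptors(circle))   # ≈ [1, 0, 0, ...]
```

Descriptor vectors extracted this way from questioned and reference loops are the kind of feature on which a likelihood ratio can then be computed.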
Abstract:
Four general equilibrium search models are compared quantitatively. The baseline framework is a calibrated macroeconomic model of the US economy designed for a welfare analysis of unemployment insurance policy. The other models make three simple and natural specification changes, regarding tax incidence, monopsony power in wage determination, and the relevant threat point. These specification changes have a major impact on the equilibrium and on the welfare implications of unemployment insurance, partly because search externalities magnify the effects of wage changes. The optimal level of unemployment insurance depends strongly on whether raising benefits has a larger impact on search effort or on hiring expenditure.