977 results for Interval generalized set
Abstract:
Principal curves have been defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Now choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. When a one-dimensional curve runs through the set of these special points, it is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
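The search for a principal oriented point can be sketched numerically. The Python fragment below is a minimal illustration of the property described above, assuming a finite data set and a slab of half-width `eps` standing in for the conditioning hyperplane; the direction grid, `eps`, and the function name are choices of this sketch, not of the paper.

```python
import numpy as np

def principal_oriented_point(X, x0, eps=0.5, n_dirs=180):
    """Approximate a principal oriented point of the data X near x0.

    For each candidate hyperplane through x0 (parameterized by its unit
    normal v), keep the points lying within distance eps of the hyperplane,
    compute their total variance (trace of the covariance matrix), and
    return the mean of the slab minimizing this conditional variance.
    Written in 2-D for simplicity; the idea extends to higher dimensions.
    """
    angles = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    best = None
    for theta in angles:
        v = np.array([np.cos(theta), np.sin(theta)])   # hyperplane normal
        dist = np.abs((X - x0) @ v)                    # distance to the hyperplane
        slab = X[dist < eps]
        if len(slab) < 5:
            continue
        total_var = np.trace(np.cov(slab.T))           # conditional total variance
        if best is None or total_var < best[0]:
            best = (total_var, slab.mean(axis=0), v)
    return best   # (min conditional variance, conditional mean, normal direction)

# Toy usage: a noisy arc; repeating the search from several starting points
# traces out the principal curve of oriented points.
rng = np.random.default_rng(0)
t = rng.uniform(0, np.pi, 500)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(500, 2))
print(principal_oriented_point(X, X[rng.integers(len(X))]))
```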
Abstract:
We represent interval ordered homothetic preferences with a quantitative homothetic utility function and a multiplicative bias. When preferences are weakly ordered (i.e. when indifference is transitive), such a bias equals 1. When indifference is intransitive, the biasing factor is a positive function smaller than 1 and measures a threshold of indifference. We show that the bias is constant if and only if preferences are semiordered, and we identify conditions ensuring a linear utility function. We illustrate our approach with indifference sets on a two-dimensional commodity space.
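The abstract does not write out the representation; purely as an illustrative assumption (a schematic form, not the paper's exact statement), a multiplicative interval-order representation of this kind can be sketched as follows:

```latex
% Schematic multiplicative representation (an assumption of this note,
% not quoted from the paper): u is a homothetic utility function and
% \sigma is the multiplicative bias.
\[
  x \succ y \iff \sigma(y)\,u(x) > u(y), \qquad 0 < \sigma(\cdot) \le 1 .
\]
% \sigma \equiv 1 gives a weak order (transitive indifference);
% a constant \sigma < 1 acts as a fixed multiplicative threshold,
% matching the semiorder case described in the abstract.
```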
Abstract:
The demands of representative design, as formulated by Egon Brunswik (1956), set a high methodological standard. Both experimental participants and the situations with which they are faced should be representative of the populations to which researchers claim to generalize results. Failure to observe the latter has led to notable experimental failures in psychology from which economics could learn. It also raises questions about the meaning of testing economic theories in abstract environments. Logically, abstract tests can only be generalized to abstract realities and these may or may not have anything to do with the empirical realities experienced by economic actors.
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
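As a rough illustration of the simulation approach (not the paper's actual models), the following Python sketch draws repeated small samples from a simple linear moment condition E[z(y − βx)] = 0 and records the GMM (here, just-identified instrumental-variables) estimate of β in each replication, giving its small-sample distribution; all names and parameter values are assumptions of this sketch.

```python
import numpy as np

def simulate_gmm_beta(n_obs=50, n_reps=2000, beta=1.0, seed=0):
    """Monte Carlo small-sample distribution of the GMM estimator of beta
    in the linear model y = beta * x + u with a single instrument z,
    based on the moment condition E[z * (y - beta * x)] = 0.
    With one moment and one parameter, the GMM estimate has the closed
    form beta_hat = sum(z * y) / sum(z * x)."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_reps)
    for r in range(n_reps):
        z = rng.normal(size=n_obs)
        u = rng.normal(size=n_obs)
        x = 0.8 * z + rng.normal(size=n_obs)    # regressor correlated with instrument
        y = beta * x + u
        estimates[r] = np.sum(z * y) / np.sum(z * x)
    return estimates

est = simulate_gmm_beta()
print("mean:", est.mean(), "std:", est.std(), "5%-95%:", np.percentile(est, [5, 95]))
```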
Abstract:
It is shown that preferences can be constructed from observed choice behavior in a way that is robust to indifferent selection (i.e., the agent is indifferent between two alternatives but, nevertheless, is only observed selecting one of them). More precisely, a suggestion by Savage (1954) to reveal indifferent selection by considering small monetary perturbations of alternatives is formalized and generalized to a purely topological framework: preferences over an arbitrary topological space can be uniquely derived from observed behavior under the assumptions that they are continuous and nonsatiated and that a strictly preferred alternative is always chosen, and indifferent selection is then characterized by discontinuity in choice behavior. Two particular cases are then analyzed: monotonic preferences over a partially ordered set, and preferences representable by a continuous pseudo-utility function.
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
Abstract:
Chrysomya albiceps (Wiedemann) and Hemilucilia segmentaria (Fabricius) (Diptera, Calliphoridae) were used to estimate the postmortem interval in a forensic case in Minas Gerais, Brazil. The corpse of a man was found in a Brazilian highland savanna (cerrado) in the state of Minas Gerais. Fly larvae were collected at the crime scene and arrived at the laboratory three days afterwards. From the eight pre-pupae, seven adults of Chrysomya albiceps (Wiedemann, 1819) emerged and, from the two larvae, two adults of Hemilucilia segmentaria (Fabricius, 1805) were obtained. As necrophagous insects use corpses as a feeding resource, their development rate can be used as a tool to estimate the postmortem interval. The post-embryonic development stage of the immatures collected on the body was estimated as the difference between the total development time and the time required for them to become adults in the lab. The estimated age of the maggots from both species and the minimum postmortem interval were four days. This is the first time that H. segmentaria has been used to estimate the postmortem interval in a forensic case.
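The arithmetic behind the estimate is a simple subtraction; the following Python sketch shows it with illustrative placeholder durations (the abstract reports only the resulting four-day estimate, so the individual times below are assumptions of this note).

```python
# Post-embryonic age of the immatures when collected =
#   total development time of the species (egg to adult, at rearing conditions)
#   minus the time the collected larvae still needed to reach adulthood in the lab.
# The minimum postmortem interval is then taken as that estimated age.

total_development_days = 7.0   # placeholder: species' egg-to-adult development time
days_to_adult_in_lab = 3.0     # placeholder: time from collection to adult emergence

estimated_age_days = total_development_days - days_to_adult_in_lab
minimum_pmi_days = estimated_age_days
print(f"estimated larval age / minimum PMI: {minimum_pmi_days:.0f} days")
```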
Abstract:
Human immunodeficiency virus type 1 (HIV-1) isolates from 20 chronically infected patients who participated in a structured treatment interruption (STI) trial were studied to determine whether viral fitness influences reestablishment of viremia. Viruses derived from individuals who spontaneously controlled viremia had significantly lower in vitro replication capacities than viruses derived from individuals that did not control viremia after interruption of antiretroviral therapy (ART), and replication capacities correlated with pre-ART and post-STI viral set points. Of note, no clinically relevant improvement of viral loads upon STI occurred. Virus isolates from controlling and noncontrolling patients were indistinguishable in terms of coreceptor usage, genetic subtype, and sensitivity to neutralizing antibodies. In contrast, viruses from controlling patients exhibited increased sensitivity to inhibition by chemokines. Sensitivity to inhibition by RANTES correlated strongly with slower replication kinetics of the virus isolates, suggesting a marked dependency of these virus isolates on high coreceptor densities on the target cells. In summary, our data indicate that viral fitness is a driving factor in determining the magnitude of viral rebound and viral set point in chronic HIV-1 infection, and thus fitness should be considered as a parameter influencing the outcome of therapeutic intervention in chronic infection.
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that have largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
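Fitness-distance correlation can be computed directly from a sample of solutions. The Python sketch below (the function name and the toy sampling scheme are assumptions of this note) measures the Pearson correlation between solution cost and distance to the best solution found, the quantity used above to classify the benchmark instances.

```python
import numpy as np

def fitness_distance_correlation(solutions, costs):
    """Pearson correlation between solution cost ("fitness") and the
    distance to the best solution in the sample.

    solutions : (n_solutions, n_columns) 0/1 array, each row one cover
    costs     : length-n_solutions array of objective values
    Distance is the Hamming distance to the lowest-cost solution found.
    """
    solutions = np.asarray(solutions)
    costs = np.asarray(costs, dtype=float)
    best = solutions[np.argmin(costs)]
    distances = np.sum(solutions != best, axis=1)   # Hamming distances
    return np.corrcoef(costs, distances)[0, 1]

# Toy usage with random 0/1 "covers" and unit column costs.
rng = np.random.default_rng(1)
sols = rng.integers(0, 2, size=(200, 40))
print(fitness_distance_correlation(sols, sols.sum(axis=1)))
```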
Abstract:
It is shown how correspondence analysis may be applied to a subset of response categories from a questionnaire survey, for example the subset of undecided responses or the subset of responses for a particular category. The idea is to maintain the original relative frequencies of the categories and not re-express them relative to totals within the subset, as would normally be done in a regular correspondence analysis of the subset. Furthermore, the masses and chi-square metric assigned to the data subset are the same as those in the correspondence analysis of the whole data set. This variant of the method, called Subset Correspondence Analysis, is illustrated on data from the ISSP survey on Family and Changing Gender Roles.
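A minimal numerical sketch of this idea, under the usual CA conventions (all names below are assumptions of this note): the row and column masses come from the full table, and the standardized residuals of the subset are decomposed without renormalizing to subset totals.

```python
import numpy as np

def subset_correspondence_analysis(N, subset_cols):
    """Subset CA sketch: keep the masses and chi-square metric of the
    full table N, decompose only the columns in subset_cols.

    N           : (rows, cols) contingency table of counts
    subset_cols : indices of the columns forming the subset
    Returns singular values and row/column principal coordinates.
    """
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                      # correspondence matrix (full table)
    r = P.sum(axis=1)                    # row masses (full table)
    c = P.sum(axis=0)                    # column masses (full table)
    J = np.asarray(subset_cols)
    # Standardized residuals restricted to the subset, using full-table masses.
    S = (P[:, J] - np.outer(r, c[J])) / np.sqrt(np.outer(r, c[J]))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U / np.sqrt(r)[:, None]) * sv        # row principal coordinates
    col_coords = (Vt.T / np.sqrt(c[J])[:, None]) * sv  # column principal coordinates
    return sv, row_coords, col_coords

# Toy usage: a small table whose last column plays the role of the
# "undecided" response category analyzed as a subset.
N = np.array([[20, 30, 10],
              [25, 15, 20],
              [ 5, 10, 15]])
print(subset_correspondence_analysis(N, [2])[0])
```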
Abstract:
The influence of the basis set size and the correlation energy on the static electrical properties of the CO molecule is assessed. In particular, we have studied both the nuclear relaxation and the vibrational contributions to the static molecular electrical properties, the vibrational Stark effect (VSE) and the vibrational intensity effect (VIE). From a mathematical point of view, when a static and uniform electric field is applied to a molecule, the energy of this system can be expressed in terms of a double power series with respect to the bond length and to the field strength. From the power series expansion of the potential energy, field-dependent expressions for the equilibrium geometry, for the potential energy and for the force constant are obtained. The nuclear relaxation and vibrational contributions to the molecular electrical properties are analyzed in terms of the derivatives of the electronic molecular properties. In general, the results presented show that accurate inclusion of the correlation energy and large basis sets are needed to calculate the molecular electrical properties and their derivatives with respect to nuclear displacements and/or field strength. With respect to experimental data, the calculated power series coefficients are overestimated by the SCF, CISD, and QCISD methods. On the contrary, perturbation methods (MP2 and MP4) tend to underestimate them. On average, using the 6-311+G(3df) basis set for the CO molecule, the nuclear relaxation and the vibrational contributions to the molecular electrical properties amount to 11.7%, 3.3%, and 69.7% of the purely electronic μ, α, and β values, respectively.
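The double power series mentioned above takes the standard Taylor form (written here from the abstract's description, with generic symbols):

```latex
% Energy of the molecule as a double Taylor expansion in the bond-length
% displacement (R - R_e) and the applied static field strength F:
\[
  E(R, F) \;=\; \sum_{n \ge 0} \sum_{m \ge 0}
    \frac{1}{n!\,m!}
    \left.
      \frac{\partial^{\,n+m} E}{\partial R^{\,n}\,\partial F^{\,m}}
    \right|_{R_e,\,F=0}
    (R - R_e)^{n}\,F^{m}.
\]
% Minimizing over R at fixed F yields the field-dependent equilibrium
% geometry, potential energy, and force constant referred to in the text.
```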
Abstract:
In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles for potential energy surfaces (PESs) having pathological behaviors. In contrast to energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on the PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classic difficult case for density functional theory methods.
Abstract:
The most widely used formula for estimating glomerular filtration rate (eGFR) in children is the Schwartz formula. It was revised in 2009 using iohexol clearances, with measured GFR (mGFR) ranging between 15 and 75 ml/min/1.73 m². Here we assessed the accuracy of the Schwartz formula using the inulin clearance (iGFR) method to evaluate its accuracy for children with less renal impairment, comparing 551 iGFRs of 392 children with their Schwartz eGFRs. Serum creatinine was measured using the compensated Jaffe method. In order to find the best relationship between iGFR and eGFR, a linear quadratic regression model was fitted and a more accurate formula was derived. This quadratic formula was: 0.68 × (height (cm)/serum creatinine (mg/dl)) − 0.0008 × (height (cm)/serum creatinine (mg/dl))² + 0.48 × age (years) − (21.53 in males or 25.68 in females). This formula was validated using a split-half cross-validation technique and also externally validated with a new cohort of 127 children. Results show that the Schwartz formula is accurate up to a height (Ht)/serum creatinine value of 251, corresponding to an iGFR of 103 ml/min/1.73 m², but significantly unreliable for higher values. For an accuracy of 20 percent, the quadratic formula was significantly better than the Schwartz formula for all patients and for patients with a Ht/serum creatinine of 251 or greater. Thus, the new quadratic formula could replace the revised Schwartz formula, which is accurate for children with moderate renal failure but not for those with less renal impairment or hyperfiltration.
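For concreteness, the quadratic formula quoted above can be written directly as code; the function name and argument layout are this note's own, while the coefficients are those given in the abstract.

```python
def egfr_quadratic(height_cm, serum_creatinine_mg_dl, age_years, male):
    """Quadratic eGFR formula quoted in the abstract (ml/min/1.73 m^2).

    eGFR = 0.68*(Ht/Scr) - 0.0008*(Ht/Scr)**2 + 0.48*age
           - 21.53 (males) or - 25.68 (females),
    where Ht is height in cm and Scr is serum creatinine in mg/dl.
    """
    ht_scr = height_cm / serum_creatinine_mg_dl
    intercept = 21.53 if male else 25.68
    return 0.68 * ht_scr - 0.0008 * ht_scr**2 + 0.48 * age_years - intercept

# Example call with illustrative numbers only:
print(round(egfr_quadratic(140, 0.6, 10, male=True), 1))
```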