Abstract:
We propose a method for evaluating cyclical models that does not require knowledge of the DGP or of the exact empirical specification of the aggregate decision rules. We derive robust restrictions in a class of models; we use some of them to identify structural shocks and others to evaluate the model or to contrast sub-models. The approach has good size and excellent power properties, even in small samples. We show how to examine the validity of a class of models, sort out the relevance of certain frictions, evaluate the importance of an added feature, and indirectly estimate structural parameters.
Abstract:
This study aimed to analyze nipple trauma resulting from breastfeeding from a dermatological perspective. Two integrative literature reviews were conducted: the first covered definitions, classification and evaluation methods of nipple trauma, and the second covered validation studies on this theme. The first review included 20 studies; only one third of them defined nipple trauma, more than half did not define the nipple injuries they reported, and each author assessed the injuries in a particular way, without consensus. In the second integrative review, no validation study or algorithm related to nipple trauma resulting from breastfeeding was found. This demonstrates that the nipple injuries mentioned in the first review have not undergone validation studies, which explains the lack of consensus identified regarding the definition, classification and assessment methods of nipple trauma.
Abstract:
Radioimmunodetection of tumours with monoclonal antibodies is becoming an established procedure. Positron emission tomography (PET) offers better resolution than conventional gamma camera single photon emission tomography and can provide more precise quantitative data. In the present study, these powerful methods have therefore been combined to perform radioimmuno-PET (RI-PET). Monoclonal antibodies directed against carcinoembryonic antigen (CEA), namely an IgG, its F(ab')2 fragment and a mouse-human chimeric IgG derived from it, were labelled with 124I, a positron-emitting radionuclide with a convenient physical half-life of four days. Mice xenografted with a CEA-producing human colon carcinoma were injected with the 124I-labelled MAbs and the tumours were visualized using PET. The concentrations of 124I in tumour and normal tissue were determined both by PET and by direct radioactivity counting of the dissected animals, with very good agreement. To allow PET quantification, a procedure was established to account for the presence of radioactivity during the absorption correction measurement (transmission scan). The comparison of PET and tissue counting indicates that this novel combination of radioimmunolocalization and PET (RI-PET) will provide, in addition to more precise diagnosis, more accurate radiation dosimetry for radioimmunotherapy.
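For orientation only (standard radionuclide physics, not a procedure described in the abstract), quantitative imaging over several days with 124I relies on decay correction of the measured activity:
\[
A_0 = A(t)\, e^{\lambda t}, \qquad \lambda = \frac{\ln 2}{T_{1/2}}, \qquad T_{1/2} \approx 4\ \text{days for } {}^{124}\mathrm{I},
\]
so roughly half of the injected activity is still present after four days, which is what makes delayed imaging of slowly accumulating antibodies practical.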
Abstract:
OBJECTIVE: To assess the quality of prenatal care in mothers with premature and term births and to identify maternal and gestational factors associated with inadequate prenatal care. METHOD: Cross-sectional study collecting data from the pregnant woman's card, hospital records and interviews with mothers living in Maringa-PR. Data were collected from 576 mothers and their live-born infants attended in the public service from October 2013 to February 2014, using three different evaluation criteria. The association of prenatal care quality with prematurity was tested by univariate analysis and was present only for the Kessner criteria (CI=1.79;8.02). RESULTS: The indicators that contributed most to the inadequacy of prenatal care were the hemoglobin and urine tests and the assessment of fetal presentation. After logistic regression analysis, the maternal and gestational variables associated with inadequate prenatal care were combined prenatal care (CI=2.93;11.09), non-white skin color (CI=1.11;2.51), unplanned pregnancy (CI=1.34;3.17) and multiparity (CI=1.17;4.03). CONCLUSION: Prenatal care must follow the minimum recommended protocols; more attention is required for black and brown women, multiparous women and those with unplanned pregnancies, in order to prevent preterm birth and maternal and child morbimortality.
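A minimal sketch of the kind of analysis reported above, with hypothetical variable names, simulated data and the statsmodels library; this is not the study's code or data, only an illustration of estimating adjusted odds ratios and confidence intervals by logistic regression:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 576
# Hypothetical binary predictors of inadequate prenatal care (1 = yes)
X = rng.integers(0, 2, size=(n, 3))    # e.g. non-white skin color, unplanned pregnancy, multiparity
logit_p = -1.0 + X @ np.array([0.5, 0.6, 0.7])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # 1 = inadequate prenatal care

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)      # adjusted odds ratios
conf_int = np.exp(model.conf_int())     # 95% confidence intervals
print(odds_ratios)
print(conf_int)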
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, quite elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combining statistical information, as in Bayesian updating or in the combination of likelihood and robust M-estimation functions, amounts to simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
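As an illustrative anchor for the compositional side of the correspondence (standard definitions from compositional data analysis, not notation taken from the paper), the centered log-ratio transform and the Aitchison inner product on the D-part simplex are
\[
\mathrm{clr}(x) = \Big(\ln\frac{x_1}{g(x)}, \dots, \ln\frac{x_D}{g(x)}\Big), \qquad g(x) = \Big(\prod_{i=1}^{D} x_i\Big)^{1/D},
\]
\[
\langle x, y \rangle_A = \sum_{i=1}^{D} \mathrm{clr}_i(x)\,\mathrm{clr}_i(y), \qquad d_A(x,y) = \lVert \mathrm{clr}(x) - \mathrm{clr}(y) \rVert_2 ,
\]
and A2(P) generalizes this geometry from finite compositions to densities and likelihoods on arbitrary spaces.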
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity-control problems.
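A minimal worked instance of step (i) for a two-class single-server queue (the classical M/G/1 conservation law, stated here for illustration rather than quoted from the paper): under any work-conserving, non-anticipating scheduling policy the mean waiting times satisfy
\[
\rho_1 W_1 + \rho_2 W_2 = \frac{\rho\, W_0}{1-\rho}, \qquad W_0 = \tfrac{1}{2}\sum_{i=1}^{2} \lambda_i\,\mathbb{E}[S_i^2], \quad \rho_i = \lambda_i\,\mathbb{E}[S_i], \quad \rho = \rho_1 + \rho_2 < 1,
\]
so the achievable region in the (W_1, W_2) plane is a line segment whose endpoints correspond to the two strict-priority rules, and minimising a linear delay cost over this segment recovers the familiar c-mu priority rule.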
Abstract:
In this paper we propose a subsampling estimator for the distribution of statistics diverging at either known or unknown rates when the underlying time series is strictly stationary and strong mixing. Based on our results, we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety First portfolio selection.
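Since the abstract benchmarks against Hill's estimator, a minimal self-contained sketch of that benchmark may help fix ideas. This is generic textbook NumPy code, not code from the paper, and the Pareto example and sample sizes are made up:

import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha from the k largest observations.
    Generic textbook implementation, shown only to illustrate the benchmark
    the abstract compares against."""
    x = np.sort(np.asarray(x, dtype=float))               # ascending order
    top = x[-k:]                                          # k largest order statistics
    threshold = x[-k - 1]                                 # the (k+1)-th largest value
    gamma_hat = np.mean(np.log(top) - np.log(threshold))  # gamma = 1/alpha
    return 1.0 / gamma_hat

# Example on simulated heavy-tailed losses: classical Pareto with tail index alpha = 3
rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=5000) + 1.0                 # Pareto(x_m = 1, alpha = 3)
print(hill_estimator(losses, k=200))                      # should be close to 3

The subsampling estimator proposed in the paper would instead recompute the statistic of interest on blocks of consecutive observations to approximate its sampling distribution; that block-based scheme is not reproduced here.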
Abstract:
A new method of measuring joint angle using a combination of accelerometers and gyroscopes is presented. The method proposes a minimal sensor configuration with one sensor module mounted on each segment. The model is based on estimating the acceleration of the joint center of rotation by placing a pair of virtual sensors on the adjacent segments at the center of rotation. In the proposed technique, joint angles are found without the need for integration, so absolute angles can be obtained which are free from any source of drift. The model considers anatomical aspects and is personalized for each subject prior to each measurement. The method was validated by measuring knee flexion-extension angles of eight subjects walking at three different speeds and comparing the results with a reference motion measurement system. The results are very close to those of the reference system, with very small errors (rms = 1.3, mean = 0.2, SD = 1.1 deg) and an excellent correlation coefficient (0.997). The algorithm provides joint angles in real time and is ready for use in gait analysis. Technically, the system is portable, easily mountable, and can be used for long-term monitoring without hindrance to natural activities.
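The following sketch (NumPy, with an assumed axis convention) illustrates only the drift-free part of the idea, namely reading segment tilt from the gravity component of the accelerometer signal instead of integrating gyroscope data; it omits the paper's virtual-sensor compensation for the acceleration of the joint center of rotation:

import numpy as np

def segment_inclination_deg(acc):
    """Sagittal-plane inclination of a segment under a quasi-static assumption:
    the accelerometer mostly measures gravity, so tilt follows from its direction.
    Assumed axis convention for this illustration: x along the segment, z forward."""
    ax, _, az = acc
    return np.degrees(np.arctan2(az, ax))

def knee_flexion_deg(acc_thigh, acc_shank):
    """Knee flexion-extension angle as the difference of the two segment inclinations."""
    return segment_inclination_deg(acc_thigh) - segment_inclination_deg(acc_shank)

# Example with made-up accelerometer readings in units of g
print(knee_flexion_deg(np.array([0.97, 0.0, 0.24]), np.array([0.87, 0.0, 0.49])))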
Abstract:
Two main approaches are commonly used to empirically evaluate linear factor pricing models: regression and SDF methods, with centred and uncentred versions of the latter. We show that, unlike standard two-step or iterated GMM procedures, single-step estimators such as continuously updated GMM yield numerically identical values for prices of risk, pricing errors, Jensen's alphas and overidentifying restrictions tests, irrespective of the validity of the model. Therefore, there is arguably a single approach regardless of whether the factors are traded or not, or whether excess or gross returns are used. We illustrate our results by revisiting Lustig and Verdelhan's (2007) empirical analysis of currency returns.
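For reference (standard GMM notation, not the paper's), the continuously updated estimator mentioned above minimises
\[
\hat\theta_{CU} = \arg\min_{\theta}\ \bar g_T(\theta)'\, \hat S_T(\theta)^{-1}\, \bar g_T(\theta), \qquad \bar g_T(\theta) = \frac{1}{T}\sum_{t=1}^{T} g(x_t, \theta),
\]
where \(\hat S_T(\theta)\) is a consistent estimator of the long-run variance of the moment conditions, re-evaluated at every trial value of \(\theta\); two-step and iterated GMM instead hold the weighting matrix fixed at a preliminary estimate, which is why the equivalences stated in the abstract need not hold for those procedures.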
Abstract:
This paper proposes a model of financial markets and corporate finance, with asymmetric information and no taxes, where equity issues, bank debt and bond financing may all co-exist in equilibrium. The paper emphasizes the relationship-banking aspect of financial intermediation: firms turn to banks as a source of investment mainly because banks are good at helping them through times of financial distress. The debt restructuring service that banks may offer, however, is costly. Therefore, the firms which do not expect to be financially distressed prefer to obtain a cheaper market source of funding through bond or equity issues. This explains why bank lending and bond financing may co-exist in equilibrium. The reason why firms or banks also issue equity in our model is simply to avoid bankruptcy. Banks have the additional motive that they need to satisfy minimum capital adequacy requirements. Several types of equilibria are possible, one of which has all the main characteristics of a "credit crunch". This multiplicity implies that the channels of monetary policy may depend on the type of equilibrium that prevails, lending support sometimes to a "credit view" and at other times to the classical "money view".
Abstract:
This paper argues that any specific utility or disutility for gambling must be excluded from expected utility because such a theory is consequential, while a pleasure or displeasure for gambling is a matter of process, not of consequences. A (dis)utility for gambling is modeled as a process utility which monotonically combines with expected utility restricted to consequences. This allows for a process (dis)utility for gambling to be revealed. As an illustration, the model shows how empirical observations in the Allais paradox can reveal a process disutility of gambling. A more general model of rational behavior combining processes and consequences is then proposed and discussed.
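Schematically, and in our own notation rather than the paper's, the combination described above can be written as
\[
V(P) = F\big(\mathbb{E}_P[u(x)],\ g(P)\big),
\]
where \(\mathbb{E}_P[u(x)]\) is the expected utility of the consequences of lottery P, \(g(P)\) is a process (dis)utility attached to the act of gambling itself, and F is strictly increasing in both arguments; under the additional normalization (our assumption) that g vanishes for degenerate, riskless lotteries, Allais-type choice patterns can then be read as revealing a negative g on the risky options.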
Abstract:
I discuss several lessons regarding the design and conduct of monetary policy that have emerged out of the New Keynesian research program. Those lessons include the benefits of price stability, the gains from commitment about future policies, the importance of natural variables as benchmarks for policy, and the benefits of a credible anti-inflationary stance. I also point to one challenge facing NK modelling efforts: the need to come up with relevant sources of policy tradeoffs. A potentially useful approach to meeting that challenge, based on the introduction of real imperfections, is presented.
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
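To fix ideas (generic notation, not the specific coefficients derived in the paper), the rules studied make the per-unit access charge paid by network i to network j a linear function of marginal costs and retail prices,
\[
a_{i \to j} = \alpha_0 + \alpha_1 c_i + \alpha_2 c_j + \beta_1 p_i + \beta_2 p_j ,
\]
with \(c_i, c_j\) the networks' marginal costs and \(p_i, p_j\) their retail prices (average retail prices in the two-part-tariff case); the regulator's problem is then to pick the coefficients, and the paper identifies the unique choice that implements the Ramsey outcome under linear retail pricing.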