947 results for Random coefficient logit (RCL) model
Abstract:
PURPOSE: To compare the effect of a rat anti-VEGF antibody, administered either by topical or subconjunctival (SC) routes, on a rat model of corneal transplant rejection. METHODS: Twenty-four rats underwent corneal transplantation and were randomized into four treatment groups (n=6 in each group). G1 and G2 received six SC injections (0.02 ml, 10 µg/ml) of denatured (G1) or active (G2) anti-VEGF from Day 0 to Day 21 every third day. G3 and G4 were instilled three times a day with denatured (G3) or active (G4) anti-VEGF drops (10 µg/ml) from Day 0 to Day 21. Corneal mean clinical scores (MCSs) of edema (E), transparency (T), and neovessels (nv) were recorded at Days 3, 9, 15, and 21. Quantification of neovessels was performed after lectin staining of vessels on flat-mounted corneas. RESULTS: Twenty-one days after surgery, MCSs differed significantly between G1 and G2, but not between G3 and G4, and the rejection rate was significantly reduced in rats receiving active antibodies regardless of the route of administration (G2=50%, G4=66.65% versus G1 and G3=100%; p<0.05). The mean surfaces of neovessels were significantly reduced in groups treated with active anti-VEGF (G2, G4). However, anti-VEGF therapy did not completely suppress corneal neovessels. CONCLUSIONS: Specific rat anti-VEGF antibodies significantly reduced neovascularization and subsequent corneal graft rejection. The SC administration of the anti-VEGF antibody was more effective than topical instillation.
Abstract:
BACKGROUND: The reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) is a widely used, highly sensitive laboratory technique to rapidly and easily detect, identify and quantify gene expression. Reliable RT-qPCR data necessitate accurate normalization with validated control genes (reference genes) whose expression is constant in all studied conditions; this stability has to be demonstrated. We performed a literature search for studies using quantitative or semi-quantitative PCR in the rat spared nerve injury (SNI) model of neuropathic pain to verify whether any reference genes had previously been validated. We then analyzed the stability over time of 7 commonly used reference genes in the nervous system, specifically in the spinal cord dorsal horn and the dorsal root ganglion (DRG). These were: Actin beta (Actb), Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ribosomal proteins 18S (18S), L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We compared the candidate genes and established a stability ranking using the geNorm algorithm. Finally, we assessed the number of reference genes necessary for accurate normalization in this neuropathic pain model. RESULTS: We found GAPDH, HMBS, Actb, HPRT1 and 18S cited as reference genes in the literature on studies using the SNI model. Only HPRT1 and 18S had previously been demonstrated to be stable in RT-qPCR arrays. All the genes tested in this study, using the geNorm algorithm, presented gene stability values (M-values) acceptable enough to qualify as potential reference genes in both DRG and spinal cord. Using the coefficient of variation, 18S failed the 50% cut-off, with a value of 61% in the DRG. The two most stable genes in the dorsal horn were RPL29 and RPL13a; in the DRG they were HPRT1 and Actb. Using a 0.15 cut-off for pairwise variations, we found that any pair of stable reference genes was sufficient for the normalization process. CONCLUSIONS: In the rat SNI model, we validated and ranked Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as good reference genes in the spinal cord. In the DRG, 18S did not fulfill the stability criteria. The combination of any two stable reference genes was sufficient to provide accurate normalization.
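The geNorm stability measure (M-value) cited in this abstract has a simple definition: for each candidate gene it is the average standard deviation of the log2 expression ratios against every other candidate. Below is a minimal sketch of that computation, using simulated placeholder expression data rather than the study's measurements.

```python
# Minimal sketch of the geNorm M-value, assuming a matrix of relative
# expression values (rows = samples, columns = candidate genes).
# The expression data here are simulated placeholders.
import numpy as np

def genorm_m_values(expr: np.ndarray) -> np.ndarray:
    """M-value of each gene: mean standard deviation of the log2 expression
    ratios between that gene and every other candidate gene (lower = more stable)."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                       for k in range(n_genes) if k != j]
        m[j] = np.mean(pairwise_sd)
    return m

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=0.0, sigma=0.2, size=(12, 7))   # 12 samples, 7 candidates
genes = ["Actb", "GAPDH", "18S", "RPL13a", "RPL29", "HPRT1", "HMBS"]
for gene, m in sorted(zip(genes, genorm_m_values(expr)), key=lambda t: t[1]):
    print(f"{gene}: M = {m:.3f}")
```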
Abstract:
An incentives-based theory of policing is developed which can explain the phenomenon of random “crackdowns,” i.e., intermittent periods of high interdiction/surveillance. For a variety of police objective functions, random crackdowns can be part of the optimal monitoring strategy. We demonstrate support for implications of the crackdown theory using traffic data gathered by the Belgian Police Department and use the model to estimate the deterrence effect of additional resources spent on speeding interdiction.
Abstract:
This paper analyzes the nature of health care provider choice in the case of patient-initiated contacts, with special reference to a National Health Service setting, where monetary prices are zero and general practitioners act as gatekeepers to publicly financed specialized care. We focus our attention on the factors that may explain the continuously increasing use of hospital emergency visits as opposed to other provider alternatives. An extended version of a discrete choice model of demand for patient-initiated contacts is presented, allowing for individual and town residence size differences in perceived quality (preferences) between alternative providers and including travel and waiting time as non-monetary costs. Results of a nested multinomial logit model of provider choice are presented. Individual choice between alternatives considers, in a repeated nested structure, self-care, primary care, hospital and clinic emergency services. Welfare implications and income effects are analyzed by computing compensating variations, and by simulating the effects of user fees by levels of income. Results indicate that compensating variation per visit is higher than the direct marginal cost of emergency visits, and consequently, emergency visits do not appear as an inefficient alternative even for non-urgent conditions.
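For readers unfamiliar with the nested multinomial logit model used in this abstract, the sketch below shows how nested-logit choice probabilities are computed. The utilities, nesting structure and dissimilarity parameters are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch of nested-logit choice probabilities over health care
# provider alternatives; all numbers below are assumed for illustration.
import numpy as np

def nested_logit_probs(utilities, nests, lambdas):
    """utilities: dict alt -> systematic utility V_j
    nests: dict nest -> list of alternatives
    lambdas: dict nest -> dissimilarity parameter in (0, 1]."""
    inclusive = {m: np.log(sum(np.exp(utilities[j] / lambdas[m]) for j in alts))
                 for m, alts in nests.items()}
    nest_expo = {m: np.exp(lambdas[m] * inclusive[m]) for m in nests}
    denom = sum(nest_expo.values())
    probs = {}
    for m, alts in nests.items():
        p_nest = nest_expo[m] / denom
        within_denom = sum(np.exp(utilities[j] / lambdas[m]) for j in alts)
        for j in alts:
            probs[j] = p_nest * np.exp(utilities[j] / lambdas[m]) / within_denom
    return probs

utilities = {"self_care": 0.0, "primary_care": 0.8,
             "hospital_emergency": 0.5, "clinic_emergency": 0.3}
nests = {"no_visit": ["self_care"],
         "visit": ["primary_care", "hospital_emergency", "clinic_emergency"]}
lambdas = {"no_visit": 1.0, "visit": 0.6}
print(nested_logit_probs(utilities, nests, lambdas))
```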
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
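The key primitive of this choice-based model is the multinomial-logit purchase probability given the set of open fare classes. A minimal sketch follows; the fare-class names and utilities are hypothetical, and the EM estimation step described in the abstract is not shown.

```python
# Sketch of MNL purchase probabilities conditional on the open fare classes:
# P(buy j | open set S) = exp(u_j) / (exp(u0) + sum_{k in S} exp(u_k)),
# with u0 the no-purchase utility. Fare classes and utilities are assumed.
import numpy as np

def purchase_probs(open_set, utility, u_nopurchase=0.0):
    denom = np.exp(u_nopurchase) + sum(np.exp(utility[k]) for k in open_set)
    return {j: np.exp(utility[j]) / denom for j in open_set}

utility = {"Y": 1.2, "M": 0.7, "Q": 0.2}      # hypothetical fare classes
for S in [("Y",), ("Y", "M"), ("Y", "M", "Q")]:
    p = purchase_probs(S, utility)
    print(S, {k: round(v, 3) for k, v in p.items()},
          "no-purchase:", round(1 - sum(p.values()), 3))
```

Opening a cheaper class shifts probability away from the expensive classes ('buy-down'), which is exactly the behavior the control policy has to trade off.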
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights that involve area-specific estimates of bias and variance; and (b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
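As a point of reference for the composite estimators discussed here, the sketch below combines a direct and an indirect estimate with an MSE-minimizing weight, under the simplifying assumptions that the direct estimator is unbiased, the two estimators are independent, and the indirect estimator's squared bias is known. The numbers are illustrative, not from the paper.

```python
# Hedged sketch of a composite small-area estimator:
# theta_c = w * direct + (1 - w) * indirect, with w chosen to minimize MSE
# under the stated simplifying assumptions.
def composite_estimate(direct, indirect, var_direct, var_indirect, sq_bias_indirect):
    # MSE-minimizing weight on the direct estimator
    w = (var_indirect + sq_bias_indirect) / (var_direct + var_indirect + sq_bias_indirect)
    return w * direct + (1.0 - w) * indirect, w

est, w = composite_estimate(direct=120.0, indirect=105.0,
                            var_direct=64.0, var_indirect=9.0, sq_bias_indirect=25.0)
print(f"weight on direct = {w:.2f}, composite estimate = {est:.1f}")
```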
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based on two GB/SA parameters solely, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and tridimensional molecular graphics capability lay the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
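The core of the approach described above is a multiple linear regression of experimental log Po/w on implicit-solvent descriptors, evaluated with r, MAE and RMSE. The sketch below illustrates that workflow with a single synthetic "solvation free energy" descriptor and simulated data; the actual iLOGP model uses two GB/SA terms fitted on the curated set of more than 17,500 molecules.

```python
# Illustrative linear fit of experimental logP against a synthetic
# solvation-free-energy descriptor, with MAE/RMSE/r reporting.
# Data and coefficients are placeholders, not the published model.
import numpy as np

rng = np.random.default_rng(1)
dG_solv = rng.normal(-8.0, 3.0, size=500)              # hypothetical descriptor
logp_exp = 0.35 * dG_solv + 4.0 + rng.normal(0, 0.5, size=500)

X = np.column_stack([dG_solv, np.ones_like(dG_solv)])
coef, *_ = np.linalg.lstsq(X, logp_exp, rcond=None)    # ordinary least squares
pred = X @ coef
mae = np.mean(np.abs(pred - logp_exp))
rmse = np.sqrt(np.mean((pred - logp_exp) ** 2))
r = np.corrcoef(pred, logp_exp)[0, 1]
print(f"slope={coef[0]:.3f}, intercept={coef[1]:.3f}, r={r:.2f}, MAE={mae:.2f}, RMSE={rmse:.2f}")
```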
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the sake of model tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids this problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
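The classical problem this abstract invokes, estimating both the population size N and the success probability p of a binomial from repeated counts, can be illustrated with a profile-likelihood grid search. The sketch below is a generic illustration of why the problem is ill-conditioned; it is not the paper's estimation heuristic.

```python
# Profile-likelihood grid search over N for a binomial with unknown (N, p).
# The flat likelihood across many (N, p) pairs illustrates the difficulty
# the abstract refers to. Data are simulated.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
true_N, true_p = 80, 0.15
counts = rng.binomial(true_N, true_p, size=20)      # observed purchases per period

best = None
for N in range(int(counts.max()), 500):
    p_hat = counts.mean() / N                       # MLE of p for a fixed N
    loglik = binom.logpmf(counts, N, p_hat).sum()
    if best is None or loglik > best[2]:
        best = (N, p_hat, loglik)
print("true (N, p):", (true_N, true_p), "profile-MLE:", best[:2])
```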
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
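A small numerical check of the kind of equivalence discussed above: in a constant-slope, variable-intercept panel model, estimating the group effects as explicit dummy-variable parameters (LSDV) and sweeping them out by within-group demeaning yield the same slope. The data are simulated; this is an illustration, not the paper's framework.

```python
# LSDV versus within (demeaned) estimator on simulated grouped data:
# both treatments of the group effects give the identical slope.
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per = 6, 30
g = np.repeat(np.arange(n_groups), n_per)
alpha = rng.normal(0, 2, n_groups)                     # group intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.5 * x + alpha[g] + rng.normal(scale=0.5, size=x.size)

# (a) LSDV: slope plus one dummy per group
D = (g[:, None] == np.arange(n_groups)[None, :]).astype(float)
X = np.column_stack([x, D])
beta_lsdv = np.linalg.lstsq(X, y, rcond=None)[0][0]

# (b) Within estimator: demean x and y within each group
x_dm = x - np.bincount(g, weights=x)[g] / n_per
y_dm = y - np.bincount(g, weights=y)[g] / n_per
beta_within = (x_dm @ y_dm) / (x_dm @ x_dm)

print(beta_lsdv, beta_within)                          # identical up to rounding
```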
Abstract:
In many research areas (such as public health, environmental contamination, and others) one needs to use data to infer whether some proportion (%) of a population of interest lies below and/or above some threshold (or one wants it to), through the computation of a tolerance interval. The idea is that, once a threshold is given, one computes the tolerance interval or limit (which may be one- or two-sided) and then checks whether it satisfies the given threshold. Since in this work we deal with the computation of one-sided tolerance intervals, for the two-sided case we recommend, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] performed the computation of upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so based on similar ideas but in a two-way nested mixed or random effects model. In the random effects model, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we recommend, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For the case of unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case. To do so, we first carry out the construction for the upper limit, and then for the lower limit.
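The paper itself constructs one-sided tolerance intervals in a two-way nested mixed effects model; as a simpler point of reference, the sketch below computes the classical exact one-sided tolerance limit for an i.i.d. normal sample using the noncentral t distribution. This is not the nested-model construction, only the textbook special case.

```python
# Exact upper (coverage, confidence) tolerance limit for an i.i.d. normal
# sample: x_bar + k * s, with k from the noncentral t distribution.
import numpy as np
from scipy.stats import norm, nct

def upper_tolerance_limit(x, coverage=0.90, confidence=0.95):
    """Upper limit covering `coverage` of the population with the given confidence."""
    n = len(x)
    delta = norm.ppf(coverage) * np.sqrt(n)            # noncentrality parameter
    k = nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)
    return np.mean(x) + k * np.std(x, ddof=1)

x = np.random.default_rng(4).normal(loc=10.0, scale=2.0, size=25)
print(upper_tolerance_limit(x))
```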
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's kappa agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following recommendations to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
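A compressed sketch of the virtual-species evaluation loop described above: build a known suitability surface, sample it with a 'random' and a 'regular' design, fit a logistic GLM, and score predictions with Pearson's r and Cohen's kappa. The landscape, species response and sample sizes are toy assumptions, not the paper's data.

```python
# Toy virtual-species experiment comparing a random and a regular sample design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
side = 100
env = rng.normal(size=(side, side)).cumsum(axis=0).cumsum(axis=1)  # smooth synthetic gradient
env = (env - env.mean()) / env.std()
suitability = 1 / (1 + np.exp(-(2 * env - 0.5)))                   # known habitat suitability
presence = rng.random((side, side)) < suitability                  # virtual species

def evaluate(rows, cols, n_eval=2000):
    X = env[rows, cols].reshape(-1, 1)
    y = presence[rows, cols].astype(int)
    model = LogisticRegression().fit(X, y)                          # GLM with logit link
    er, ec = rng.integers(0, side, n_eval), rng.integers(0, side, n_eval)
    pred_p = model.predict_proba(env[er, ec].reshape(-1, 1))[:, 1]
    r = np.corrcoef(pred_p, suitability[er, ec])[0, 1]              # suitability accuracy
    kappa = cohen_kappa_score(presence[er, ec].astype(int), (pred_p > 0.5).astype(int))
    return r, kappa

n = 100
random_idx = (rng.integers(0, side, n), rng.integers(0, side, n))
step = side // int(np.sqrt(n))
gr = np.arange(step // 2, side, step)
regular = np.meshgrid(gr, gr, indexing="ij")
print("random :", evaluate(*random_idx))
print("regular:", evaluate(regular[0].ravel(), regular[1].ravel()))
```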
Abstract:
Commuting refers to the fact that an important fraction of workers in developed countries do not reside close to their workplaces but at long distances from them, so they have to travel to their jobs and back home daily. Although most workers hold a job in the same municipality where they live or in a neighbouring one, an important fraction of workers face long daily trips to get to their workplace and then back home. Even if we divide Catalonia (Spain) into small aggregations of municipalities, trying to make them as close to local labour markets as possible, we find that some of them have a positive commuting balance, attracting many workers from other areas and providing local jobs for almost all their resident workers. On the other side, other zones seem to be mostly residential, so an important fraction of their resident workers hold jobs in different local labour markets. Which variables influence an area's role as an attraction pole or a residential zone? In previous papers (Artís et al., 1998a, 2000; Romaní, 1999) we have brought out the main individual variables that influence commuting by analysing a sample of Catalan workers and their commuting decisions. In this paper we analyse the territorial variables that influence commuting, using data on aggregate commuting flows in Catalonia from the 1991 and 1996 Spanish Population Censuses. These variables influence commuting in two different ways: a zone with a dense, well-developed economic structure will have a high density of jobs, and labour demand cannot be fulfilled with resident workers, so it spills over local boundaries. On the other side, this economic activity has a series of side effects such as pollution, congestion or high land prices, which make these areas less desirable to live in. Workers who can afford it may prefer to live in less populated, less congested zones, where they can find cheaper land, larger homes and a better quality of life. The penalty of this decision is an increased commuting time. Our aim in this paper is to highlight the influence of the local economic structure and amenities endowment on the workplace-residence location decision. A place-to-place logit commuting model is estimated for 1991 and 1996 in order to find the economic and amenity variables with the greatest influence on commuting decisions. From these models, we can outline a first approximation to the evolution of these variables over the 1986-1996 period. Data have been obtained from the aggregate flow travel matrices of the 1986, 1991 and 1996 Spanish Population Censuses.
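A stylized sketch of a place-to-place logit of the kind estimated in the paper: the probability that a worker residing in zone i commutes to zone j is modeled as a multinomial logit over destination zones. The explanatory variables (job density, travel time) and coefficients below are assumed for illustration only.

```python
# Destination-choice logit over zones, driven by assumed job-density and
# travel-time variables; coefficients and data are illustrative placeholders.
import numpy as np

def destination_probs(job_density, travel_time, beta_jobs=1.0, beta_time=-0.08):
    v = beta_jobs * np.log(job_density) + beta_time * travel_time
    expv = np.exp(v - v.max())                 # numerically stabilized softmax
    return expv / expv.sum()

job_density = np.array([500.0, 120.0, 60.0, 20.0])   # jobs per km^2 in zones A-D
travel_time = np.array([0.0, 15.0, 30.0, 45.0])      # minutes from the residence zone
print(destination_probs(job_density, travel_time).round(3))
```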
Abstract:
Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method to potentially improve the safety and efficacy of treatment. For this partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR ratios based on phantom studies. Materials and methods: IQ NEMA (6 hot spheres) and Kyoto liver phantoms with different hot/background activity concentration ratios were imaged on a SPECT/CT (GE Infinia Hawkeye 4). For each reconstruction with the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC), and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared using OSEM with Hann, Butterworth and Gaussian filters, as well as FBP reconstruction algorithms. Regarding OSEM, RCs were assessed by varying different parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of the attenuation and diffusion corrections was also investigated. Furthermore, 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated Matlab© routines were developed in-house for automatic 2D-ROI/3D-VOI determination to reduce intra-user and intra-slice variability. The best reconstruction parameters and the RCs obtained with the IQ phantom were used to recover the corrected TNR in the case of the Kyoto phantom for arbitrary hot-lesion size. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8). Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase as compared to the best FBP reconstruction. In any case, both attenuation and diffusion corrections must be applied, thus improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses lead to similar results. Nevertheless, we recommend using 3D-VOI since tumor uptake regions are intrinsically 3D. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction in planning 90Y dosimetry for SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with the Butterworth filter and RC correction.
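A minimal sketch of the phantom-based quantities used in this abstract: the recovery coefficient (RC) relates a measured to a true activity-concentration ratio, the COV measures background noise, and the RC can be used to correct a measured tumor-to-normal-liver ratio (TNR). All numbers below are illustrative, not the study's results.

```python
# Illustrative RC, COV and RC-corrected TNR computations with placeholder values.
import numpy as np

def recovery_coefficient(measured_ratio, true_ratio):
    return measured_ratio / true_ratio

def coefficient_of_variation(background_voxels):
    return np.std(background_voxels) / np.mean(background_voxels)

measured_tnr = 3.1                                               # hypothetical SPECT/CT measurement
rc = recovery_coefficient(measured_ratio=6.2, true_ratio=8.0)    # hypothetical IQ-phantom sphere
corrected_tnr = measured_tnr / rc
background = np.random.default_rng(6).normal(100.0, 7.0, size=5000)
print(f"RC = {rc:.2f}, corrected TNR = {corrected_tnr:.2f}, "
      f"COV = {coefficient_of_variation(background):.1%}")
```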
Abstract:
Spiral chemical waves subjected to a spatiotemporal random excitability are experimentally and numerically investigated in relation to the light-sensitive Belousov-Zhabotinsky reaction. Brownian motion is identified and characterized by an effective diffusion coefficient which shows a rather complex dependence on the time and length scales of the noise relative to those of the spiral. A kinematically based model is proposed whose results are in good qualitative agreement with experiments and numerics.
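A generic sketch of how an effective diffusion coefficient, as characterized in this abstract, can be extracted from Brownian-like tip motion: simulate a 2-D random walk and fit the mean squared displacement, MSD(t) = 4Dt. The trajectory below is synthetic, not data from the Belousov-Zhabotinsky experiments.

```python
# Estimate an effective 2-D diffusion coefficient from a trajectory via the
# mean squared displacement; the trajectory is a simulated random walk.
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps, D_true = 0.1, 20000, 0.5
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0)

lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4.0        # slope / 4 in two dimensions
print(f"true D = {D_true}, estimated D = {D_est:.3f}")
```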