884 results for effort estimation
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
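The replicate-based error estimate described above can be sketched in a few lines: genotype the same sample twice, then count disagreements among loci recovered in both replicates. The function name, data layout, and toy genotypes below are illustrative assumptions, not output of the Stacks pipeline.

```python
# Minimal sketch of replicate-based genotyping-error estimation.
# Genotypes are dicts mapping locus -> sorted allele pair (illustrative).

def snp_error_rate(rep_a, rep_b):
    """Fraction of loci called in both replicates whose genotypes disagree."""
    shared = set(rep_a) & set(rep_b)          # loci recovered in both replicates
    if not shared:
        return 0.0
    mismatches = sum(1 for locus in shared if rep_a[locus] != rep_b[locus])
    return mismatches / len(shared)

# Toy replicate pair: identical genotypes expected, but one call differs.
rep1 = {"loc1": ("A", "A"), "loc2": ("A", "G"), "loc3": ("C", "C")}
rep2 = {"loc1": ("A", "A"), "loc2": ("A", "A"), "loc3": ("C", "C")}
print(snp_error_rate(rep1, rep2))  # one mismatch over three shared loci
```

In practice this rate would be computed for each candidate assembly-parameter setting, and the parameters chosen to minimize error while maximizing the number of informative loci retained.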
Abstract:
The objective of this paper is to re-evaluate the attitude to effort of a risk-averse decision-maker in an evolving environment. In the classic analysis, the space of efforts is generally discretized. More realistically, this new approach employs a continuum of effort levels. The presence of multiple possible effort and performance levels provides a better basis for explaining real economic phenomena. The traditional approach (see Laffont & Tirole, 1993; Salanié, 1997; Laffont & Martimort, 2002, among others) does not take into account the potential effect of the system dynamics on the agent's attitude to effort over time. In the context of a principal-agent relationship, it is not only the principal's incentives that can induce the private agent to allocate a high level of effort, but also the evolution of the dynamic system. Incentives can be ineffective when the environment does not encourage the agent to invest a high level of effort. This explains why, some effici
Abstract:
According to the hypothesis of Traub, also known as the 'formula of Traub', postmortem values of glucose and lactate found in the cerebrospinal fluid or vitreous humor are considered indicators of antemortem blood glucose levels. However, because the lactate concentration increases in the vitreous and cerebrospinal fluid after death, some authors postulated that using the sum value to estimate antemortem blood glucose levels could lead to an overestimation of the cases of glucose metabolic disorders with fatal outcomes, such as diabetic ketoacidosis. The aim of our study, performed on 470 consecutive forensic cases, was to ascertain the advantages of the sum value to estimate antemortem blood glucose concentrations and, consequently, to rule out fatal diabetic ketoacidosis as the cause of death. Other biochemical parameters, such as blood 3-beta-hydroxybutyrate, acetoacetate, acetone, glycated haemoglobin and urine glucose levels, were also determined. In addition, postmortem native CT scan, autopsy, histology, neuropathology and toxicology were performed to confirm diabetic ketoacidosis as the cause of death. According to our results, the sum value does not add any further information for the estimation of antemortem blood glucose concentration. The vitreous glucose concentration appears to be the most reliable marker to estimate antemortem hyperglycaemia and, along with the determination of other biochemical markers (such as blood acetone and 3-beta-hydroxybutyrate, urine glucose and glycated haemoglobin), to confirm diabetic ketoacidosis as the cause of death.
Abstract:
Background: In the present article, we propose an alternative method for dealing with negative affectivity (NA) biases when investigating the association between a deleterious psychosocial work environment and poor mental health. First, we investigated how strong NA must be to cause an observed correlation between the independent and dependent variables. Second, we assessed whether NA can have a large enough impact, on a large enough number of subjects, to invalidate the observed correlations between dependent and independent variables. Methods: We simulated 10,000 populations of 300 subjects each, using the marginal distribution of workers in an actual population that had answered Siegrist's questionnaire on effort-reward imbalance (ERI) and the General Health Questionnaire (GHQ). Results: The results of the present study suggested that simulated NA has a minimal effect on the mean scores for effort and reward. However, its effect on the correlations between the ERI ratio and the GHQ score can be substantial, even in simulated populations with limited NA. Conclusions: When investigating the relationship between the ERI ratio and the GHQ score, we suggest the following rules of interpretation: correlations with an explained variance of 5% or below should be considered with caution; correlations with an explained variance between 5% and 10% may result from NA, although this effect does not seem likely; and correlations with an explained variance of 10% or above are not likely to be the result of NA biases.
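The simulation logic above can be illustrated by injecting a shared latent NA component into both the exposure and outcome scores and measuring how much explained variance (r²) that bias alone generates. The distributions, effect size, and sample size below are assumptions for illustration, not the authors' actual simulation parameters.

```python
# Illustrative sketch: how much spurious explained variance can a simulated
# negative-affectivity (NA) bias produce on its own?
import random

def simulate_r2(n_subjects=300, na_strength=0.3, seed=0):
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_subjects):
        na = rng.gauss(0.0, 1.0)                      # latent negative affectivity
        xs.append(rng.gauss(0.0, 1.0) + na_strength * na)  # effort/reward score
        ys.append(rng.gauss(0.0, 1.0) + na_strength * na)  # GHQ-like score
    # Squared Pearson correlation = explained variance
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return (cov / (vx * vy) ** 0.5) ** 2

print(simulate_r2())  # r^2 attributable to NA alone under these assumptions
```

Repeating this over many simulated populations and NA strengths is what supports thresholds of the kind proposed in the conclusions.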
Abstract:
BACKGROUND: Recommendations for statin use for primary prevention of coronary heart disease (CHD) are based on estimation of the 10-year CHD risk. We compared the 10-year CHD risk assessments and the percentages of participants eligible for statin therapy using three scoring algorithms currently used in Europe. METHODS: We studied 5683 women and men, aged 35-75, without overt cardiovascular disease (CVD), in a population-based study in Switzerland. We compared the 10-year CHD risk using three scoring schemes: the Framingham risk score (FRS) from the U.S. National Cholesterol Education Program's Adult Treatment Panel III (ATP III), the PROCAM scoring scheme from the International Atherosclerosis Society (IAS), and the European risk SCORE for low-risk countries, without and with extrapolation to age 60 as recommended by the European Society of Cardiology (ESC) guidelines. With FRS and PROCAM, high risk was defined as a 10-year risk of fatal or non-fatal CHD >20%; with SCORE, as a 10-year risk of fatal CVD ≥5%. We compared the proportions of high-risk participants and eligibility for statin use according to these three schemes. For each guideline, we estimated the impact of increasing statin use from current partial compliance to full compliance on potential CHD deaths averted over 10 years, using a success proportion of 27% for statins. RESULTS: 5.8% of participants (both genders) were classified as high risk according to FRS and 3.0% according to PROCAM, whereas the European risk SCORE classified 12.5% as high risk (15.4% with extrapolation to age 60). For the primary prevention of CHD, 18.5% of participants were eligible for statin therapy using ATP III, 16.6% using IAS, and 10.3% using ESC (13.0% with extrapolation), because the ESC guidelines recommend statin therapy only in high-risk subjects. In comparison with IAS, agreement in identifying adults eligible for statins was good with ATP III but moderate with ESC. From a population perspective, full compliance with ATP III guidelines would avert up to 17.9% of the 24,310 CHD deaths expected over 10 years in Switzerland, 17.3% with IAS and 10.8% with ESC (11.5% with extrapolation). CONCLUSIONS: Full compliance with guidelines for statin therapy would result in substantial health benefits, but the proportions of high-risk adults and of adults eligible for statin use varied substantially depending on the scoring system and corresponding guidelines used for estimating CHD risk in Europe.
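The population-level arithmetic behind the deaths-averted figures can be sketched directly from the numbers reported in the abstract. The expected-death total and per-guideline reduction fractions come from the text; everything else is a simple back-of-envelope calculation.

```python
# Back-of-envelope sketch of the deaths-averted arithmetic reported above.
EXPECTED_CHD_DEATHS = 24_310  # CHD deaths expected over 10 years in Switzerland

# Reported reduction under full compliance with each guideline.
reductions = {"ATP III": 0.179, "IAS": 0.173, "ESC": 0.108}

for guideline, frac in reductions.items():
    averted = frac * EXPECTED_CHD_DEATHS
    print(f"{guideline}: ~{averted:,.0f} deaths averted over 10 years")
```

Full ATP III compliance, for example, corresponds to roughly 0.179 × 24,310 ≈ 4,350 deaths averted.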
Abstract:
This paper does two things. First, it presents alternative approaches to the standard methods of estimating productive efficiency using a production function. It favours a parametric approach (viz. the stochastic production frontier approach) over a nonparametric approach (e.g. data envelopment analysis); and, further, one that provides a statistical explanation of efficiency, as well as an estimate of its magnitude. Second, it illustrates the favoured approach (i.e. the ‘single stage procedure’) with estimates of two models of explained inefficiency, using data from the Thai manufacturing sector, after the crisis of 1997. Technical efficiency is modelled as being dependent on capital investment in three major areas (viz. land, machinery and office appliances) where land is intended to proxy the effects of unproductive, speculative capital investment; and both machinery and office appliances are intended to proxy the effects of productive, non-speculative capital investment. The estimates from these models cast new light on the five-year long, post-1997 crisis period in Thailand, suggesting a structural shift from relatively labour intensive to relatively capital intensive production in manufactures from 1998 to 2002.
Abstract:
Until recently, much effort has been devoted to the estimation of panel data regression models without adequate attention being paid to the drivers of diffusion and interaction across cross-section and spatial units. We discuss some new methodologies in this emerging area and demonstrate their use in measurement and inference on cross-section and spatial interactions. Specifically, we highlight the important distinction between spatial dependence driven by unobserved common factors and spatial dependence based on a spatial weights matrix. We argue that purely factor-driven models of spatial dependence may be somewhat inadequate because of their connection with the exchangeability assumption. Limitations and potential enhancements of the existing methods are discussed, and several directions for new research are highlighted.
Abstract:
In this paper we analyse a simple two-person sequential-move contest game with heterogeneous players. Assuming that the heterogeneity could be the consequence of past discrimination, we study the effects of implementing an affirmative action policy, which tackles this heterogeneity by compensating discriminated players, and compare them with the situation in which the heterogeneity is ignored and the contestants are treated equally. In our analysis we consider different orders of moves. We show that the order in which contestants move is a very important factor in determining the effects of implementing the affirmative action policy. We also prove that the degree of heterogeneity between individuals plays a significant role in such cases. In particular, in contrast to predictions found in the literature, we demonstrate that as a consequence of the interplay of these two factors, the response to the implementation of the affirmative action policy may be a decrease in the contestants' total equilibrium effort relative to the unbiased contest game.
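The flavour of such a sequential contest can be conveyed with a minimal numerical example: a two-player Tullock contest with a prize normalised to 1 and heterogeneous marginal effort costs (c1 for the leader, c2 for the follower). This is a generic illustration of the class of game analysed above, not the authors' exact specification.

```python
# Sequential Tullock contest sketch: the follower best-responds in closed
# form; the leader's effort is found by grid search anticipating that reply.

def follower_best_response(x, c2):
    """Follower maximises y/(x+y) - c2*y; interior FOC gives the reply below."""
    y = (x / c2) ** 0.5 - x
    return max(y, 0.0)

def leader_equilibrium(c1, c2, grid=10_000, x_max=5.0):
    """Grid-search the leader's effort given the follower's best response."""
    best_x, best_payoff = 0.0, 0.0
    for i in range(1, grid + 1):
        x = x_max * i / grid
        y = follower_best_response(x, c2)
        payoff = x / (x + y) - c1 * x
        if payoff > best_payoff:
            best_x, best_payoff = x, payoff
    return best_x, follower_best_response(best_x, c2)

# Leader is the stronger (cheaper-effort) player here: c1 = 1.0 < c2 = 1.5.
x_star, y_star = leader_equilibrium(c1=1.0, c2=1.5)
total_effort = x_star + y_star
print(x_star, y_star, total_effort)
```

Re-running this with cost parameters adjusted by a compensation scheme shows how a policy that handicaps or subsidises one player shifts individual and total equilibrium effort.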
Abstract:
This study addresses the issue of the presence of a unit root on growth rate estimation by the least-squares approach. We argue that when the log of a variable contains a unit root, i.e., it is not stationary, then the growth rate estimate from the log-linear trend model is not a valid representation of the actual growth of the series. In fact, under such a situation, we show that the growth of the series is the cumulative impact of a stochastic process. As such, the growth estimate from such a model is just a spurious representation of the actual growth of the series, which we refer to as a "pseudo growth rate". Hence such an estimate should be interpreted with caution. On the other hand, we highlight that the statistical representation of a series as containing a unit root is not easy to separate from an alternative description which represents the series as fundamentally deterministic (no unit root) but containing a structural break. In search of a way around this, our study presents a survey of both the theoretical and empirical literature on unit root tests that take into account possible structural breaks. We show that when a series is trend-stationary with breaks, it is possible to use the log-linear trend model to obtain well-defined estimates of growth rates for sub-periods which are valid representations of the actual growth of the series. Finally, to highlight the above issues, we carry out an empirical application whereby we estimate meaningful growth rates of real wages per worker for 51 industries from the organised manufacturing sector in India for the period 1973-2003, which are not only unbiased but also asymptotically efficient. We use these growth rate estimates to highlight the evolving inter-industry wage structure in India.
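The log-linear trend estimator at issue above is simply the OLS slope of ln(y) on time; the slope is the continuously compounded growth rate, and it is a valid growth estimate only when ln(y) is trend-stationary. A minimal self-contained sketch:

```python
# Log-linear trend growth estimator: regress ln(y_t) on t by OLS.
import math

def loglinear_growth(series):
    """OLS slope of ln(y) on t = 0, 1, ..., n-1 (growth per period)."""
    logs = [math.log(v) for v in series]
    n = len(logs)
    t_mean = (n - 1) / 2
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(logs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Series growing at exactly 5% per period (continuously compounded).
series = [100 * math.exp(0.05 * t) for t in range(20)]
print(loglinear_growth(series))  # ≈ 0.05
```

When the log series instead follows a random walk with drift, this same regression still returns a slope, but, as the abstract argues, that number is a "pseudo growth rate" rather than a valid description of the series' growth.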
Abstract:
It has been observed that university professors sometimes become less research active in their later years. This paper models the decision to become inactive as a utility maximising problem under conditions of uncertainty and derives an age-dependent activity condition for the level of research productivity. The model implies that professors who are close to retirement age are more likely to become inactive when faced with setbacks in their research while those who continue research do not lower their activity levels. Using data from the University of Iceland, we find support for the model’s predictions. The model suggests that universities should induce their older faculty to remain research active by striving to make their research more productive and enjoyable, maintaining peer pressure, reducing job security and offering higher performance related pay.
Abstract:
While estimates of models with spatial interaction are very sensitive to the choice of spatial weights, considerable uncertainty surrounds the definition of spatial weights in most studies with cross-section dependence. We show that, in the spatial error model, the spatial weights matrix is only partially identified, and is fully identified under the structural constraint of symmetry. For the spatial error model, we propose a new methodology for estimation of spatial weights under the assumption of symmetric spatial weights, with extensions to other important spatial models. The methodology is applied to regional housing markets in the UK, providing an estimated spatial weights matrix that generates several new hypotheses about the economic and socio-cultural drivers of spatial diffusion in housing demand.
Abstract:
Developing a predictive understanding of subsurface flow and transport is complicated by the disparity of scales spanned by the controlling hydrological properties and processes. Conventional techniques for characterizing hydrogeological properties (such as pumping, slug, and flowmeter tests) typically rely on borehole access to the subsurface. Because their spatial extent is commonly limited to the vicinity of the wellbores, these methods often cannot provide sufficient information to describe key controls on subsurface flow and transport. The field of hydrogeophysics has evolved in recent years to explore the potential that geophysical methods hold for improving the quantification of subsurface properties and processes relevant for hydrological investigations. This chapter is intended to familiarize hydrogeologists and water-resource professionals with the state of the art as well as existing challenges associated with hydrogeophysics. We provide a review of the key components of hydrogeophysical studies, which include: geophysical methods commonly used for shallow subsurface characterization; petrophysical relationships used to link the geophysical properties to hydrological properties and state variables; and estimation or inversion methods used to integrate hydrological and geophysical measurements in a consistent manner. We demonstrate the use of these different geophysical methods, petrophysical relationships, and estimation approaches through several field-scale case studies.
Among other applications, the case studies illustrate the use of hydrogeophysical approaches to quantify subsurface architecture that influence flow (such as hydrostratigraphy and preferential pathways); delineate anomalous subsurface fluid bodies (such as contaminant plumes); monitor hydrological processes (such as infiltration, freshwater-seawater interface dynamics, and flow through fractures); and estimate hydrological properties (such as hydraulic conductivity) and state variables (such as water content). The case studies have been chosen to illustrate how hydrogeophysical approaches can yield insights about complex subsurface hydrological processes, provide input that improves flow and transport predictions, and provide quantitative information over field-relevant spatial scales. The chapter concludes by describing existing hydrogeophysical challenges and associated research needs. In particular, we identify the area of quantitative watershed hydrogeophysics as a frontier area, where significant effort is required to advance the estimation of hydrological properties and processes (and their uncertainties) over spatial scales relevant to the management of water resources and contaminants.
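One widely used petrophysical relationship of the kind referred to above is Archie's law, which links bulk electrical resistivity (a geophysically measurable property) to porosity and water saturation (hydrological state variables). The parameter values below are illustrative assumptions for a clean, unconsolidated sand.

```python
# Archie's law: rho = a * rho_w * porosity**(-m) * saturation**(-n)
# a: tortuosity factor, m: cementation exponent, n: saturation exponent.

def archie_resistivity(rho_w, porosity, saturation, a=1.0, m=2.0, n=2.0):
    """Bulk resistivity (ohm-m) from pore-water resistivity and saturation."""
    return a * rho_w * porosity ** (-m) * saturation ** (-n)

# Fully saturated sand, 30% porosity, 20 ohm-m pore water:
print(archie_resistivity(rho_w=20.0, porosity=0.3, saturation=1.0))
```

Inverting such a relationship is how a resistivity survey can be translated into estimates of water content, subject to the usual caveats about clay content and parameter uncertainty.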
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans using Partial Least Squares (PLS) regression in order to predict the LMP of the carcass and of the different cuts, and to study and compare two different methodologies for selecting the variables (Variable Importance for Projection (VIP) and stepwise) to be included in the prediction equation. The cross-validated error of prediction (RMSEPCV) of the LMP obtained with PLS and VIP-based selection was 0.82%, and with stepwise selection it was 0.83%. Predicting the LMP by scanning only the ham gave an RMSEPCV of 0.97%; if both the ham and the loin were scanned, the RMSEPCV was 0.90%. Results indicate that, for CT data, both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
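The cross-validated prediction error (RMSEPCV) used above to compare variable-selection strategies can be sketched generically: leave each sample out, refit the calibration, predict the held-out sample, and take the root mean square of the prediction errors. For a self-contained illustration, a univariate least-squares fit stands in for the PLS model, and the data are made up; the point is the error metric, not the calibration itself.

```python
# Leave-one-out cross-validated root-mean-square error of prediction (RMSEP).
import math

def fit_ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmsep_cv(xs, ys):
    """RMSEP with leave-one-out cross-validation."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        slope, intercept = fit_ols(train_x, train_y)
        errors.append((slope * xs[i] + intercept - ys[i]) ** 2)
    return math.sqrt(sum(errors) / len(errors))

# Toy calibration: a CT-derived predictor vs. lean meat percentage (LMP).
ct = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
lmp = [52.1, 54.0, 55.8, 58.2, 59.9, 62.1]
print(rmsep_cv(ct, lmp))
```

In the actual study the same principle applies, with PLS on the CT variables retained by VIP or stepwise selection in place of the toy univariate fit.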