906 results for Classical measurement error model


Relevância:

100.00%

Publicador:

Resumo:

For today's industrialised economies, remanufacturing represents perhaps the largest unexploited resource and opportunity for achieving greater economic growth in an environmentally conscious manner. The aim of this paper is to investigate the impact of remanufacturing on the economy from an economic-efficiency point of view. This phenomenon has been analysed in the literature in a static context. We use a multi-sector input–output framework in a dynamic context to study intra-period relationships among the sectors of the economy. We extend the classical dynamic input–output model by taking the activity of remanufacturing into consideration. We try to answer the question of whether remanufacturing/reuse increases the growth possibilities of an economy. We present a sufficient condition concerning the efficiency of an economy with remanufacturing. By this evaluation we analyse a possible sustainable development of the economy on the basis of the product recovery management of industries.
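The balanced-growth calculation in the classical dynamic input–output model that the paper extends can be sketched numerically. The coefficient matrices below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

# Hypothetical 3-sector coefficients (illustrative only, not from the paper).
A = np.array([[0.2, 0.1, 0.1],   # intermediate input coefficients
              [0.1, 0.3, 0.1],
              [0.1, 0.1, 0.2]])
B = np.array([[1.2, 0.3, 0.0],   # capital (investment) coefficients
              [0.3, 1.5, 0.3],
              [0.0, 0.3, 0.9]])

# Dynamic Leontief model: x_t = A x_t + B (x_{t+1} - x_t).
# On a balanced growth path x_{t+1} = (1+g) x_t this reduces to
# (I - A) x = g B x, so 1/g is an eigenvalue of (I - A)^{-1} B and the
# maximal growth rate comes from the dominant (Frobenius) eigenvalue.
M = np.linalg.inv(np.eye(3) - A) @ B
g = 1.0 / max(np.linalg.eigvals(M).real)
print(f"maximal balanced growth rate g = {g:.4f}")
```

Remanufacturing enters such a model by modifying the coefficient matrices, which shifts the dominant eigenvalue and hence the attainable growth rate.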

Resumo:

The aim of the paper is to investigate the impact of recycling on the use of non-renewable resources in the economy. The paper tries to generalize the classical dynamic input–output model. In this regard we extend the standard Leontief model with the balance equation of recycled products, and we establish some properties of this augmented model. We investigate how recycling extends the availability of non-renewable natural resources for the next generations in an inter-industry framework. Supposing a balanced growth both for production and consumption, we examine the existence of the balanced growth path of this model and compare the results to the classical Leontief model. We try to answer the question whether recycling/reuse increases the growth possibility of an economy. Finally, we illustrate our results with a simple numerical example. Thus, we analyze a possible sustainable development of the economy on the basis of the product recovery management of industries.
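A minimal numerical sketch of the idea, with made-up coefficients: recycling recovers a share of the material embodied in gross output, reducing the primary extraction needed to meet a given final demand:

```python
import numpy as np

# Illustrative numbers (not from the paper): a 2-sector Leontief economy.
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])          # input coefficients
f = np.array([100.0, 80.0])        # final demand
x = np.linalg.solve(np.eye(2) - A, f)   # gross output x = (I - A)^{-1} f, here [190, 165]

# Resource use per unit of output; a share rho of the material used is
# recycled back, so primary (virgin) extraction falls accordingly.
resource_coef = np.array([0.5, 0.8])
rho = 0.25                          # assumed recycling rate
total_use = resource_coef @ x       # 227.0
primary_use = (1 - rho) * total_use # 170.25
print(total_use, primary_use)
```

In the paper's dynamic setting the recycled-product balance equation plays this role period by period, which is what extends the availability of non-renewable resources.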

Resumo:

In this paper the classical Harrodian growth model is transformed into a representative-agent model through nonlinear extensions and the introduction of Keynesian and Schumpeterian traditions. In support of the celebrated Lucas critique, it is shown that the trajectories of intrinsic economic growth rates either scatter into turbulent chaos or lead to a large-scale order. This depends primarily on the type of the consumption function, with the market values of certain parameters playing only a secondary role. Another surprising result is empirical: a foreign trade surplus may generate strange attractors under certain foreign-exchange values of the domestic currency.
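The qualitative dichotomy described above — trajectories that either settle into large-scale order or scatter into chaos depending on a consumption-function parameter — can be illustrated with a logistic map as a stand-in for the paper's growth-rate dynamics (the parameter mu and the values below are illustrative, not the model's):

```python
# Iterate a nonlinear map for the growth rate; depending on mu the orbit
# converges to a fixed point (order) or wanders aperiodically (chaos).
def orbit(mu, g0=0.5, n=2000, keep=50):
    g = g0
    for _ in range(n):            # discard the transient
        g = mu * g * (1 - g)
    tail = []
    for _ in range(keep):         # record the long-run behaviour
        g = mu * g * (1 - g)
        tail.append(round(g, 6))
    return tail

ordered = orbit(2.8)    # converges: the tail visits essentially one value
chaotic = orbit(3.9)    # aperiodic: the tail visits many distinct values
print(len(set(ordered)), len(set(chaotic)))
```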

Resumo:

The current study was designed to build on and extend the existing knowledge base of factors that cause, maintain, and influence child molestation. Theorized links among the type of offender and the offender's levels of moral development and social competence in the perpetration of child molestation were investigated. The conceptual framework for the study is based on the cognitive-developmental stages of moral development proposed by Kohlberg, the unified theory, or Four-Preconditions Model, of child molestation proposed by Finkelhor, and the Information-Processing Model of Social Skills proposed by McFall. The study sample consisted of 127 adult male child molesters participating in outpatient group therapy. All subjects completed a self-report questionnaire which included questions designed to obtain relevant demographic data; questions similar to those used for the social competency dimension of the Massachusetts Treatment Center: Child Molester Typology, Version 3; the Defining Issues Test (DIT) short form; the Social Avoidance and Distress Scale (SADS); the Rathus Assertiveness Schedule (RAS); and the Questionnaire Measure of Empathic Tendency (Empathy Scale). Data were analyzed using confirmatory factor analysis, t-tests, and chi-square statistics. Partial support was found for the hypothesis that moral development is a construct separate from, but correlated with, social competence. As predicted, a statistically significant difference was found between the mean DIT P scores of the subject sample and those of the general male population, although the actual mean-score differences were small. This suggests that child molesters, as a group, function at a lower level of moral development than the general male population. In addition, the situational offenders in the study sample demonstrated a statistically significantly higher level of moral development than the preferential offenders.
The data did not support the hypothesis that situational offenders demonstrate lower levels of social competence than preferential offenders. Relatively little significance is placed on this finding, however, because the measure for the social competency variable was likely subject to considerable measurement error, in that the items used as indicators were not clearly defined. The last hypothesis, which concerned potential differences in social anxiety, assertion skills, and empathy between the situational and preferential offender types, was not supported by the data.
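The group comparisons above relied on t-tests. As an illustration with invented scores (not the study's data), Welch's two-sample t statistic for comparing group means can be computed as:

```python
import math, statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)   # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))                 # standard error of the difference
    return (ma - mb) / se

# Hypothetical score vectors, purely for illustration.
situational = [34, 38, 41, 36, 39, 40]
preferential = [30, 33, 31, 35, 29, 32]
print(round(welch_t(situational, preferential), 3))   # 4.581
```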

Resumo:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
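As a simplified sketch of estimating an unknown constant phase shift from fringe data — an assumed Ramsey-type protocol, not the paper's exact observables — the phase can be read off from the first Fourier component of a scanned analysis phase:

```python
import math

# Fringe data p(theta) = (1 + cos(theta + phi)) / 2 for an unknown,
# constant phase phi; estimate phi from the Fourier components of the
# scanned analysis phase theta.
def estimate_phase(thetas, probs):
    c = sum(p * math.cos(t) for t, p in zip(thetas, probs))
    s = sum(p * math.sin(t) for t, p in zip(thetas, probs))
    return math.atan2(-s, c)   # phase of the cos(theta + phi) fringe

phi_true = 0.7
thetas = [2 * math.pi * k / 32 for k in range(32)]
probs = [(1 + math.cos(t + phi_true)) / 2 for t in thetas]
print(round(estimate_phase(thetas, probs), 3))   # 0.7
```

In an experiment the probabilities come from finite measurement statistics, so the estimate carries shot noise; the idealized data here make the recovery exact.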

Resumo:

My dissertation has three chapters which develop and apply microeconometric techniques to empirically relevant problems. All the chapters examine robustness issues (e.g., measurement error and model misspecification) in econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a consistent confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds for the return to college attendance when attendance is mismeasured and endogenous.
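As a much simpler illustration of the partial-identification idea — worst-case Manski bounds for a bounded outcome, not the chapter's sharp LATE construction — the identified interval for an average treatment effect can be computed from observable quantities alone:

```python
# With outcome Y in [0, 1] and binary treatment D, the unobserved
# counterfactual means are bounded by 0 and 1, which brackets the ATE.
def manski_bounds(p_d, ey_d1, ey_d0):
    """p_d: P(D=1); ey_d1: E[Y | D=1]; ey_d0: E[Y | D=0]."""
    lower = ey_d1 * p_d + 0 * (1 - p_d) - (ey_d0 * (1 - p_d) + 1 * p_d)
    upper = ey_d1 * p_d + 1 * (1 - p_d) - (ey_d0 * (1 - p_d) + 0 * p_d)
    return lower, upper

# Illustrative numbers only.
lo, hi = manski_bounds(p_d=0.4, ey_d1=0.7, ey_d0=0.5)
print(lo, hi)   # an interval of width 1 that contains the ATE
```

Instruments and monotonicity assumptions, as in the chapter, are what shrink such intervals to something informative.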

The second chapter, which is part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These estimator classes include popular estimators such as Rust (1987)’s nested fixed point estimator, Hotz and Miller (1993)’s conditional choice probability estimator, Aguirregabiria and Mira (2002)’s nested algorithm estimator, and Pesendorfer and Schmidt-Dengler (2008)’s least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified, and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result compares K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust’s (1987) bus engine problem.

The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for several reasons, for example, because of marginally attached workers. Instead of searching for a unique definition of each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-difference result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.
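The bounding strategy can be sketched with hypothetical counts: competing definitions disagree on whether marginally attached workers count as unemployed, which brackets the unemployment rate between a narrow and a broad definition:

```python
# All counts are made up for illustration.
employed = 900
unemployed_narrow = 60       # unemployed under a narrow (BLS-style) definition
marginally_attached = 25     # unemployed only under broader definitions

def rate(u, labor_force):
    return u / labor_force

lower = rate(unemployed_narrow, employed + unemployed_narrow)
upper = rate(unemployed_narrow + marginally_attached,
             employed + unemployed_narrow + marginally_attached)
print(round(lower, 4), round(upper, 4))   # 0.0625 0.0863
```

Any defensible definition yields a rate inside [lower, upper], so treatment effects computed from these bounds hold simultaneously for all competing definitions.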

Resumo:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences---corrected for panel attrition---are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
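A minimal sketch of the augmented-records construction, with an assumed prior margin for a single categorical variable (the variable names and prior are invented for illustration):

```python
import random

# Append n_aug synthetic rows whose margin for "educ" matches the prior
# belief, leaving every other variable missing (None). More augmented
# rows encode a stronger (less uncertain) prior.
random.seed(0)
prior_margin = {"<HS": 0.1, "HS": 0.5, "BA+": 0.4}   # assumed prior belief
n_aug = 200                                           # prior "sample size"

categories = list(prior_margin)
weights = [prior_margin[c] for c in categories]
augmented = [{"educ": random.choices(categories, weights)[0],
              "income": None, "employment": None}     # left missing
             for _ in range(n_aug)]

share_hs = sum(r["educ"] == "HS" for r in augmented) / n_aug
print(share_hs)   # empirical margin approximates the prior value 0.5
```

These rows would then be concatenated with the observed data before running the latent class MCMC, which treats the None entries as ordinary missing values.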

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
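One simple form such a reporting-error model can take is a misclassification (confusion) matrix estimated from the gold standard survey; a sketch with illustrative numbers, not the NSCG/ACS estimates:

```python
import numpy as np

# M[i, j] = P(reported category j | true category i), estimated from a
# gold standard survey; correct the reported shares by solving
# M^T @ true = reported.
M = np.array([[0.95, 0.05],    # true "no degree": 5% over-report a degree
              [0.02, 0.98]])   # true "degree": 2% under-report
reported = np.array([0.32, 0.68])        # observed, error-prone shares
true_shares = np.linalg.solve(M.T, reported)
print(true_shares.round(4))   # [0.3226 0.6774]
```

The thesis's fully Bayesian treatment instead imputes error-corrected values record by record, but the confusion-matrix logic is the core of the correction.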

Resumo:

Atomic ions trapped in micro-fabricated surface traps can be utilized as a physical platform with which to build a quantum computer. They possess many of the desirable qualities of such a device, including high-fidelity state preparation and readout, universal logic gates, and long coherence times, and they can be readily entangled with each other through photonic interconnects. The use of optical cavities integrated with trapped-ion qubits as a photonic interface presents the possibility of order-of-magnitude improvements in several key areas of their use in quantum computation. The first part of this thesis describes the design and fabrication of a novel surface trap for integration with an optical cavity. The trap is custom made on a highly reflective mirror surface and includes the capability of moving the ion trap location along all three trap axes with nanometer-scale precision. The second part of this thesis demonstrates the suitability of small micro-cavities formed from laser-ablated fused silica substrates, with radii of curvature in the 300-500 micron range, for use with the mirror trap as part of an integrated ion-trap cavity system. Quantum computing applications for such a system include dramatic improvements in the photonic entanglement rate (up to 10 kHz), the qubit measurement time (down to 1 microsecond), and the measurement error rate (down to the 10^-5 range). The final part of this thesis details a performance simulator for exploring the physical resource requirements and performance demands of scaling such a quantum computer to sizes capable of performing quantum algorithms beyond the limits of classical computation.

Resumo:

Energy efficiency improvement has been a key objective of China’s long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency has been increasing over the research period and that the provinces with the highest energy efficiency are at the east coast and the ones with the lowest in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model both factors in the own province and in first-order neighboring provinces are considered. Per capita income in the own province has a positive effect. Furthermore, foreign direct investment and population density in the own province and in neighboring provinces have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product in the own province and in neighboring provinces has negative effects. From the analysis it follows that inflow of foreign direct investment and reform of state-owned enterprises are important policy handles.
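For reference, the generic translog stochastic-frontier specification underlying such an analysis has the form (standard textbook notation; not necessarily the paper's exact specification or variable set):

```latex
\ln Y_{it} = \beta_0 + \sum_{j} \beta_j \ln X_{j,it}
  + \tfrac{1}{2}\sum_{j}\sum_{k} \beta_{jk} \ln X_{j,it}\,\ln X_{k,it}
  + v_{it} - u_{it}
```

where \(Y_{it}\) is output, the \(X_{j,it}\) are inputs (including energy), \(v_{it}\) is statistical noise, and \(u_{it} \ge 0\) is the inefficiency term from which efficiency scores are derived; single-factor energy efficiency then compares the frontier-minimal energy input with the energy actually used.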

Resumo:

We develop a body size growth model of Northern cod (Gadus morhua) in Northwest Atlantic Fisheries Organization (NAFO) Divisions 2J3KL during 2009–2013, using individual length-at-age data from the bottom trawl survey in these divisions over the same period. We use the von Bertalanffy (VonB) model extended to account for between-individual variation in growth and for variation that may be caused by the methods by which fish are caught and sampled for length and age measurements. We assume between-individual variation in growth arises because individuals grow at different rates (k) and achieve different maximum sizes (l∞). We also include measurement error (ME) in length and age in our model, since ignoring these errors can lead to biased estimates of the growth parameters. We use the structural errors-in-variables (SEV) approach to estimate individual variation in growth, ageing error variation, and the true age distribution of the fish. Our results show the existence of individual variation in growth and of ME in age. According to the negative log-likelihood ratio (NLLR) test, the best model indicated that: (1) growth patterns differ across divisions and years; (2) between-individual variation in growth is the same within a division across years; and (3) the ME in age and the true age distribution differ for each year and division.
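A minimal simulation sketch of the extended VonB model, with assumed parameter values: each individual draws its own growth rate k and asymptotic length l∞, and the observed length adds measurement error:

```python
import random, math

random.seed(1)

def simulate_length(age):
    # Between-individual variation: each fish has its own k and l_inf.
    k = random.gauss(0.12, 0.02)        # individual growth rate (1/yr)
    l_inf = random.gauss(110.0, 8.0)    # individual asymptotic length (cm)
    true_len = l_inf * (1 - math.exp(-k * age))   # von Bertalanffy curve
    return true_len + random.gauss(0, 2.0)        # length measurement error

lengths = [simulate_length(age=5) for _ in range(1000)]
mean_len = sum(lengths) / len(lengths)
print(round(mean_len, 1))   # near 110 * (1 - exp(-0.12 * 5)) ≈ 49.6
```

The SEV estimation in the paper runs this logic in reverse, additionally treating the recorded ages themselves as error-prone.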

Resumo:

A traceless variant of the Johnson-Segalman viscoelastic model is presented. The viscoelastic extra stress tensor is decomposed into its traceless (deviatoric) and spherical parts, leading to a reformulation of the classical Johnson-Segalman model. The equivalence of the two models is established by comparing model predictions for simple test cases. The new model is validated using several 2D benchmark problems, and its structure and behavior are discussed.
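The decomposition at the heart of the reformulation can be sketched directly (the example tensor is arbitrary):

```python
import numpy as np

# Split a stress tensor tau into a spherical part (proportional to the
# identity) and a traceless deviatoric part; the two recompose tau exactly.
tau = np.array([[3.0, 1.0, 0.0],
                [1.0, 2.0, 0.5],
                [0.0, 0.5, 1.0]])
spherical = (np.trace(tau) / 3.0) * np.eye(3)
deviatoric = tau - spherical

assert abs(np.trace(deviatoric)) < 1e-12      # deviatoric part is traceless
print(np.allclose(spherical + deviatoric, tau))   # True
```

Evolving the two parts separately is what gives the traceless variant its numerical appeal: the trace constraint is enforced by construction rather than maintained approximately.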

Resumo:

The objective of this study was to estimate the calibration regressions for the dietary data measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men: the HIM Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. The dispersion graphs between the instruments demonstrate increased correlation after the correction, although the dispersion of the points increases as the explanatory power of the models worsens. Identification of the calibration regressions for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
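A bare-bones sketch of the calibration regression (the intake values are invented, and the study additionally adjusted for covariates such as age and BMI):

```python
import statistics

# Regress the reference measure (24HR) on the error-prone FFQ and use
# the fitted values as calibration-corrected intakes.
ffq = [180, 220, 260, 300, 340, 380]   # QFFQ intake (g/day)
r24h = [150, 190, 200, 250, 270, 310]  # mean of the 24HR recalls (g/day)

mx, my = statistics.mean(ffq), statistics.mean(r24h)
beta = sum((x - mx) * (y - my) for x, y in zip(ffq, r24h)) / \
       sum((x - mx) ** 2 for x in ffq)
alpha = my - beta * mx
corrected = [alpha + beta * x for x in ffq]
print(round(alpha, 2), round(beta, 3))   # 10.33 0.779
```

A slope below one is the usual attenuation signature of FFQ measurement error; the corrected values share the reference instrument's mean by construction.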

Resumo:

The aim was to estimate the calibration regressions for the dietary data measured by the quantitative food frequency questionnaire (QFFQ) used in the Natural History of HPV Infection in Men: the HIM Study. A sample of 98 individuals from the HIM study answered, in interviews, one QFFQ and three 24-hour recalls (24HR). The calibration was performed through linear regression analysis, with the 24HR as the dependent variable and the QFFQ as the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and of the calibration-corrected QFFQ are statistically equal. The dispersion graphs between the instruments showed increased correlation after the correction of the data, although the dispersion of the points increases as the explanatory power of the models worsens. The identification of the calibration regressions for the dietary data of the HIM study will allow the effect of diet on HPV infection to be estimated, corrected for the measurement error of the QFFQ.

Resumo:

This article presents a kinetic evaluation of the froth flotation of ultrafine coal contained in the tailings from a Colombian coal preparation plant. The plant utilizes a circuit of dense-medium cyclones and spirals. The tailings contained material that was 63% finer than 14 μm. Flotation tests were performed with and without coal "promoters" (diesel oil or kerosene) to evaluate the kinetics of coal flotation. It was found that flotation rates were higher when no promoter was added. Different kinetic models were evaluated for the flotation of the coal from the tailings, and the best-fitting model was found to be the classical first-order model.
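The classical first-order model identified as best-fitting has the form R(t) = R∞(1 − e^(−kt)), where R∞ is the ultimate recovery and k the flotation rate constant; a sketch with illustrative parameter values, not the paper's fitted estimates:

```python
import math

def recovery(t, r_inf=0.85, k=0.5):
    """First-order flotation kinetics: cumulative recovery at time t (min)."""
    return r_inf * (1 - math.exp(-k * t))

for t in (1, 2, 4, 8):
    print(t, round(recovery(t), 3))
```

Fitting the model to timed recovery data amounts to estimating r_inf and k, e.g. by nonlinear least squares; comparing fitted k values with and without promoter quantifies the rate difference reported above.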

Resumo:

Community awareness of the sustainable use of land, water and vegetation resources is increasing. The sustainable use of these resources is pivotal to sustainable farming systems. However, techniques for monitoring the sustainable management of these resources are poorly understood and untested. We propose a framework to benchmark and monitor resources in the grains industry. Eight steps are listed below to achieve these objectives: (i) define industry issues; (ii) identify the issues through growers, stakeholder and community consultation; (iii) identify indicators (measurable attributes, properties or characteristics) of sustainability through consultation with growers, stakeholders, experts and community members, relating to: crop productivity; resource maintenance/enhancement; biodiversity; economic viability; community viability; and institutional structure; (iv) develop and use selection criteria to select indicators that consider: responsiveness to change; ease of capture; community acceptance and involvement; interpretation; measurement error; stability, frequency and cost of measurement; spatial scale issues; and mapping capability in space and through time. The appropriateness of indicators can be evaluated using a decision making system such as a multiobjective decision support system (MO-DSS, a method to assist in decision making from multiple and conflicting objectives); (v) involve stakeholders and the community in the definition of goals and setting benchmarking and monitoring targets for sustainable farming; (vi) take preventive and corrective/remedial action; (vii) evaluate effectiveness of actions taken; and (viii) revise indicators as part of a continual improvement principle designed to achieve best management practice for sustainable farming systems. 
The major recommendations are to: (i) implement the framework for resources (land, water and vegetation, economic, community and institution) benchmarking and monitoring, and integrate this process with current activities so that awareness, implementation and evolution of sustainable resource management practices become normal practice in the grains industry; (ii) empower the grains industry to take the lead by using relevant sustainability indicators to benchmark and monitor resources; (iii) adopt a collaborative approach by involving various industry, community, catchment management and government agency groups to minimise implementation time. Monitoring programs such as Waterwatch, Soilcheck, Grasscheck and Topcrop should be utilised; (iv) encourage the adoption of a decision making system by growers and industry representatives as a participatory decision and evaluation process. Widespread use of sustainability indicators would assist in validating and refining these indicators and evaluating sustainable farming systems. The indicators could also assist in evaluating best management practices for the grains industry.
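The multiobjective indicator selection in step (iv) can be sketched as a weighted-sum score over the selection criteria, one simple form a MO-DSS comparison can take (criteria, weights and scores below are illustrative):

```python
# Score candidate sustainability indicators against weighted criteria;
# the highest weighted sum is the preferred indicator under this scheme.
criteria = ["responsiveness", "ease of capture", "cost", "mappability"]
weights = [0.35, 0.25, 0.2, 0.2]            # weights sum to 1
indicators = {
    "soil organic carbon":       [0.6, 0.7, 0.5, 0.7],
    "crop water-use efficiency": [0.8, 0.5, 0.6, 0.7],
}
scores = {name: sum(w * s for w, s in zip(weights, vals))
          for name, vals in indicators.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Real MO-DSS tools handle conflicting objectives more carefully (e.g. via Pareto ranking rather than a single weighted sum), but the weighted score conveys the basic trade-off logic.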