976 results for Reading--Ability testing.
Abstract:
This paper discusses the role of deterministic components in the DGP and in the auxiliary regression model which underlies the implementation of the Fractional Dickey-Fuller (FDF) test for I(1) against I(d) processes with d ∈ [0, 1). This is an important test in many economic applications because I(d) processes with d < 1 are mean-reverting although, when 0.5 ≤ d < 1, they are nonstationary, like I(1) processes. We show how simple the implementation of the FDF test is in these situations, and argue that it has better properties than LM tests. A simple testing strategy entailing only asymptotically normally distributed tests is also proposed. Finally, an empirical application is provided where the FDF test allowing for deterministic components is used to test for long memory in the per capita GDP of several OECD countries, an issue that has important consequences for discriminating between growth theories, and on which there is some controversy.
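The auxiliary regression underlying an FDF-type test is easy to illustrate. The sketch below is a minimal, hypothetical implementation in the spirit of the Dolado-Gonzalo-Mayoral version of the test: Δy_t is regressed on Δ^d y_{t-1}, where the fractional difference is computed from the truncated binomial expansion of (1 − L)^d, and the t-statistic on the slope is the test statistic. Function names, the choice of d, and the absence of deterministic components are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def frac_diff(y, d):
    """Fractionally difference a series: (1 - L)^d y_t, using the
    truncated binomial expansion pi_0 = 1, pi_j = pi_{j-1}(j-1-d)/j."""
    n = len(y)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    # x_t = sum_{j=0}^{t} pi_j * y_{t-j}
    return np.array([pi[: t + 1] @ y[t::-1] for t in range(n)])

def fdf_tstat(y, d):
    """t-statistic on phi in: Delta y_t = phi * Delta^d y_{t-1} + e_t."""
    dy = np.diff(y)                      # Delta y_t
    x = frac_diff(y, d)[:-1]             # Delta^d y_{t-1}
    phi = (x @ dy) / (x @ x)
    resid = dy - phi * x
    s2 = resid @ resid / (len(dy) - 1)
    return phi / np.sqrt(s2 / (x @ x))

# Toy check: a pure random walk should not lead to rejection of I(1)
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))
print(fdf_tstat(y, d=0.7))  # compare with a standard normal critical value
```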
Abstract:
Small sample properties are of fundamental interest when only limited data are available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
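For a finite outcome space and simple hypotheses, the minimal type II error over all (possibly randomized) tests can be found as a linear program: choose a rejection probability φ(x) ∈ [0, 1] for each outcome x to minimize Σ_x (1 − φ(x)) P₁(x) subject to the level constraint Σ_x φ(x) P₀(x) ≤ α. The sketch below is a toy illustration of this benchmark idea, not the paper's game-theoretic construction; the distributions and α are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import binom

def minimal_type2(p0, p1, alpha):
    """Minimal type II error over randomized tests phi in [0,1]^K:
    minimize sum (1 - phi) * p1  subject to  sum phi * p0 <= alpha.
    Equivalent LP: minimize -p1 . phi  s.t.  p0 . phi <= alpha."""
    res = linprog(c=-p1, A_ub=p0[None, :], b_ub=[alpha],
                  bounds=[(0.0, 1.0)] * len(p0))
    phi = res.x
    return 1.0 - p1 @ phi, phi

# Hypothetical example: Binomial(5, 0.5) vs Binomial(5, 0.8)
k = np.arange(6)
p0 = binom.pmf(k, 5, 0.5)
p1 = binom.pmf(k, 5, 0.8)
beta, phi = minimal_type2(p0, p1, alpha=0.05)
print(beta)   # benchmark against which a given test's type II error is judged
print(phi)    # the optimal (Neyman-Pearson) randomized test
```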
Abstract:
With increasing competition in the business world, product prices have come to play a key role in the expansion and survival of businesses. Consequently, today the selling price of a product is determined by the market, so companies should produce at the lowest possible cost to ensure the desired financial return. The purpose of this work is to verify whether Target Costing procedures can be applied in small and medium-sized industrial enterprises in São Vicente whose activity is food production. Product A, manufactured by the company ALVO, SA, was selected as the test product. In order to achieve these objectives, several techniques and research methods were used: bibliographic analysis, interviews, informal conversations, questionnaires, and analysis of the financial documents of ALVO, SA, the subject of this case study. To understand and apply the Target Costing process, we also resorted to a detailed reading of the related literature. A questionnaire was applied in order to learn the customers' opinion of the ideal price for each kilogram of product A. The interview with the managing director of ALVO, SA, combined with data obtained from the accounting department, served as a way to know the company and its functioning, highlighting items such as selling price formation and cost management, among other aspects. Before testing the use of Target Costing through a practical case study, its theoretical application was first verified by testing its principles and assumptions for product A. Secondly, the application of the Target Costing process was shown step by step for product A at ALVO, SA. As a result, we came to the conclusion that Target Costing procedures can be applied, in theory and in practice, in small and medium-sized industrial enterprises in São Vicente where product A is manufactured.
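The core arithmetic of target costing is simple: the market-determined selling price minus the desired profit margin yields the allowable (target) cost, and the gap to the current cost is what cost management must close. The sketch below illustrates this with purely hypothetical numbers; the abstract does not disclose the actual figures for product A.

```python
def target_costing(market_price, margin_rate, current_cost):
    """Target cost = market price - required profit;
    cost gap = current cost - target cost (to be closed)."""
    target_cost = market_price * (1.0 - margin_rate)
    cost_gap = current_cost - target_cost
    return target_cost, cost_gap

# Hypothetical figures per kilogram of product A
target, gap = target_costing(market_price=250.0,  # price customers accept
                             margin_rate=0.20,    # desired 20% margin
                             current_cost=215.0)  # current unit cost
print(f"target cost: {target:.2f}, cost reduction needed: {gap:.2f}")
```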
Abstract:
The paper proposes a technique to jointly test for groupings of unknown size in the cross-sectional dimension of a panel and to estimate the parameters of each group, and applies it to identifying convergence clubs in per capita income. The approach uses the predictive density of the data, conditional on the parameters of the model. The steady-state distribution of European regional data clusters around four poles of attraction with different economic features. The distribution of per capita income of OECD countries has two poles of attraction, and each group has clearly identifiable economic characteristics.
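The grouping idea can be caricatured in a few lines: order the units, try every partition into contiguous groups, and keep the partition under which the data have the highest density, each group being evaluated under its own parameters. The sketch below is a deliberately stylized, hypothetical two-group version of this logic, not the paper's predictive-density procedure.

```python
import numpy as np
from scipy.stats import norm

def best_split(y):
    """Stylized two-group classification: sort units, try every split
    point, and keep the one with the highest total Gaussian log-density,
    each group evaluated at its own mean and standard deviation."""
    y = np.sort(y)
    def loglik(seg):
        return norm.logpdf(seg, seg.mean(), seg.std() + 1e-9).sum()
    scores = [loglik(y[:k]) + loglik(y[k:]) for k in range(2, len(y) - 1)]
    k = int(np.argmax(scores)) + 2
    return y[:k], y[k:]

# Hypothetical per capita incomes (thousands) for a panel of countries
rng = np.random.default_rng(1)
incomes = np.concatenate([rng.normal(8, 1, 12), rng.normal(30, 3, 18)])
poor, rich = best_split(incomes)
print(len(poor), len(rich))  # recovers the two poles of attraction
```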
Abstract:
Due to practical difficulties in obtaining direct genetic estimates of effective sizes, conservation biologists have to rely on so-called 'demographic models', which combine life-history and mating-system parameters with F-statistics in order to produce indirect estimates of effective sizes. However, for the same practical reasons that prevent direct genetic estimates, the accuracy of demographic models is difficult to evaluate. Here we use individual-based, genetically explicit computer simulations in order to investigate the accuracy of two such demographic models aimed at investigating the hierarchical structure of populations. We show that, by and large, these models provide good estimates under a wide range of mating systems and dispersal patterns. However, one of the models should be avoided whenever the focal species' breeding system approaches monogamy with no sex bias in dispersal, or when a substructure within social groups is suspected, because effective sizes may then be strongly overestimated. The timing during the life cycle at which F-statistics are evaluated is also of crucial importance, and attention should be paid to it when designing field sampling, since different demographic models assume different timings. Our study shows that individual-based, genetically explicit models provide a promising way of evaluating the accuracy of demographic models of effective size and of delineating their field of applicability.
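The F-statistics that such demographic models take as inputs are standard population-genetic quantities. As a point of reference, the sketch below computes Wright's F_ST for a single biallelic locus from subpopulation allele frequencies; it is a textbook formula, not one of the two demographic models evaluated in the paper, and the frequencies are hypothetical.

```python
import numpy as np

def fst(subpop_freqs):
    """Wright's F_ST for one biallelic locus:
    F_ST = (H_T - H_S) / H_T, where H_S is the mean expected
    heterozygosity within subpopulations and H_T the expected
    heterozygosity of the pooled population (equal sizes assumed)."""
    p = np.asarray(subpop_freqs, dtype=float)
    h_s = np.mean(2 * p * (1 - p))       # within-subpopulation
    p_bar = p.mean()
    h_t = 2 * p_bar * (1 - p_bar)        # pooled (total) population
    return (h_t - h_s) / h_t

# Hypothetical allele frequencies in four social groups
print(fst([0.1, 0.3, 0.6, 0.8]))
```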
Abstract:
In Drosophila, courtship is an elaborate sequence of behavioural patterns that enables the flies to distinguish conspecific mates from those of closely related species. This is important because drosophilids usually gather at feeding sites, where males of various species court females vigorously. We investigated the effects of previous experience on D. mercatorum courtship by testing whether virgin males learn to improve their courtship by observing other flies (social learning), or by adjusting their pre-existing behaviour based on previous experiences (facilitation). Behaviours recorded in a controlled environment were courtship latency, courtship (orientation, tapping and wing vibration), mating, and other behaviours not related to sexual activities. This study demonstrated that males of D. mercatorum were capable of improving their mating ability based on prior experience, but showed no social learning in the development of their courtship.
Abstract:
Expected utility theory (EUT) has been challenged as a descriptive theory in many contexts, and the medical decision analysis context is no exception. Several researchers have suggested that rank-dependent utility theory (RDUT) may accurately describe how people evaluate alternative medical treatments. Recent research in this domain has addressed a relevant feature of RDU models, probability weighting, but to date no direct test of this theory has been made. This paper provides a test of the main axiomatic difference between EUT and RDUT when health profiles are used as outcomes of risky treatments. Overall, EU best described the data. However, evidence of the editing and cancellation operations hypothesized in Prospect Theory and Cumulative Prospect Theory was apparent in our study: we found that RDU outperformed EU in the presentation of the risky treatment pairs in which the common outcome was not obvious. The influence of framing effects on the performance of RDU, and their importance as a topic for future research, is discussed.
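The difference between the two theories can be made concrete. Under EU a lottery is valued as Σ pᵢ u(xᵢ); under RDU the probabilities are replaced by decision weights derived from a weighting function w applied to cumulative, rank-ordered probabilities. The sketch below is a generic illustration with a commonly used one-parameter weighting function; the utility function, the weighting parameter, and the lottery are hypothetical and not those of the paper.

```python
import numpy as np

def w(p, gamma=0.61):
    """Tversky-Kahneman inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def eu(outcomes, probs, u=np.sqrt):
    """Expected utility: sum of p_i * u(x_i)."""
    return np.sum(np.asarray(probs) * u(np.asarray(outcomes)))

def rdu(outcomes, probs, u=np.sqrt, gamma=0.61):
    """Rank-dependent utility: sort outcomes from best to worst and
    weight each by w(cumulative prob) - w(previous cumulative prob)."""
    order = np.argsort(outcomes)[::-1]          # best outcome first
    x, p = np.asarray(outcomes)[order], np.asarray(probs)[order]
    cum = np.cumsum(p)
    weights = w(cum, gamma) - w(np.concatenate(([0.0], cum[:-1])), gamma)
    return np.sum(weights * u(x))

# Hypothetical risky treatment: QALY-like outcomes with probabilities
x, p = np.array([20.0, 10.0, 2.0]), np.array([0.10, 0.60, 0.30])
print(eu(x, p), rdu(x, p))
```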
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
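As a baseline for the resampling-based procedures discussed above, Holm's (1979) stepdown method, which the paper aims to improve upon, can be stated in a few lines: sort the p-values, and at step i compare the i-th smallest with α/(k − i + 1), stopping at the first non-rejection. The sketch below is a standard implementation of that classical baseline, not the paper's new procedure.

```python
import numpy as np

def holm_stepdown(pvals, alpha=0.05):
    """Holm (1979) stepdown procedure controlling the FWE.
    Returns a boolean rejection decision for each hypothesis."""
    p = np.asarray(pvals)
    k = len(p)
    order = np.argsort(p)                 # smallest p-value first
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):
        if p[idx] <= alpha / (k - i):     # step i: threshold alpha/(k-i)
            reject[idx] = True
        else:
            break                         # stop at first non-rejection
    return reject

print(holm_stepdown([0.001, 0.04, 0.03, 0.005]))
# -> [ True False False  True ]
```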
Abstract:
We consider a dynamic multifactor model of investment with financing imperfections, adjustment costs, and fixed and variable capital. We use the model to derive a test of financing constraints based on a reduced-form variable capital equation. Simulation results show that this test correctly identifies financially constrained firms even when the estimation of firms' investment opportunities is very noisy. In addition, the test is well specified in the presence of both concave and convex adjustment costs of fixed capital. We confirm empirically the validity of this test on a sample of small Italian manufacturing companies.
Abstract:
This paper analyzes the formation of Research Corporations as an alternative governance structure for performing R&D compared to pursuing in-house R&D projects. Research Corporations are private for-profit research centers that bring together several firms with similar research goals. In a Research Corporation, formal authority over the choice of projects is jointly exercised by the top management of the member firms. A private for-profit organization cannot commit not to interfere with the project choice of the researchers. However, increasing the number of member firms of the Research Corporation reduces the incentive of member firms to meddle with the research projects of researchers, because exercising formal authority over the choice of research projects is a public good. The Research Corporation thus offers researchers greater autonomy than a single firm pursuing an identical research program in its in-house R&D department. This attracts higher-ability researchers to the Research Corporation compared to the internal R&D department. The paper uses the theoretical model to analyze the organization of the Microelectronics and Computer Technology Corporation (MCC). The facts of this case confirm the existence of a tension between control over the choice of research projects and the ability of researchers that the organization is able to attract or hold onto.
Abstract:
This paper extends previous results on optimal insurance trading in the presence of a stock market that allows continuous asset trading and substantial personal heterogeneity, and applies those results in a context of asymmetric information, with reference to the role of genetic testing in insurance markets. We find a novel and surprising result under symmetric information: agents may optimally prefer to purchase full insurance despite the presence of unfairly priced insurance contracts and other assets which are correlated with insurance. Asymmetric information has a Hirshleifer-type effect which can be solved by suspending insurance trading. Nevertheless, agents can attain their first-best allocations, which suggests that the practice of restricting insurance not to be contingent on genetic tests can be efficient.
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature: whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
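A simulation-based evaluation of fit of the kind described typically simulates the calibrated model many times and asks where the data moment falls in the simulated distribution of that moment. The sketch below is a generic, hypothetical illustration of that logic (here for the saving-investment correlation); it is not the specific procedure of the paper, and the data-generating process and data moment are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_model(T, rho=0.2):
    """Stand-in for a calibrated model: draws saving and investment
    series whose population correlation is rho (hypothetical)."""
    cov = [[1.0, rho], [rho, 1.0]]
    s, i = rng.multivariate_normal([0.0, 0.0], cov, size=T).T
    return s, i

def moment(s, i):
    """Second-order property of interest: corr(saving, investment)."""
    return np.corrcoef(s, i)[0, 1]

# Distribution of the moment across repeated model simulations
sims = np.array([moment(*simulate_model(T=120)) for _ in range(1000)])

data_moment = 0.6                        # hypothetical value from the data
# Simulated p-value: how extreme is the data moment under the model?
pval = np.mean(sims >= data_moment)
print(f"simulated p-value: {pval:.3f}")  # small value -> poor fit
```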
Abstract:
Studies assessing skin irritation from chemicals have traditionally used laboratory animals; however, such methods are questionable regarding their relevance for humans. New in vitro methods have been validated, such as the reconstructed human epidermis (RHE) models (Episkin®, Epiderm®), but their accuracy against in vivo results such as the 4-h human patch test (HPT) is 76% at best (Epiderm®). There is a need to develop an in vitro method that better simulates the anatomo-pathological changes encountered in vivo. Our objective was to develop an in vitro method to determine skin irritation using viable human skin through histopathology, and to compare the results for 4 tested substances with the main in vitro methods and the in vivo animal method (Draize test). Human skin removed during surgery was dermatomed and mounted on an in vitro flow-through diffusion cell system. Ten chemicals with known non-irritant (heptyl butyrate, hexyl salicylate, butyl methacrylate, isoproturon, bentazon, DEHP and methylisothiazolinone (MI)) and irritant properties (folpet, 1-bromohexane and methylchloroisothiazolinone (MCI/MI)), a negative control (sodium chloride) and a positive control (sodium lauryl sulphate) were applied. The skin was exposed for at least 4 h. Histopathology was performed to investigate signs of irritation (spongiosis, necrosis, vacuolization). We obtained 100% accuracy with the HPT model, 75% with the RHE models and 50% with the Draize test for the 4 tested substances. The coefficients of variation (CV) between our three test batches were <0.1, showing good reproducibility. Furthermore, we objectively reported histopathological signs of irritation on an irritation scale: strong (folpet), significant (1-bromohexane), slight (MCI/MI at 750/250 ppm) and none (isoproturon, bentazon, DEHP and MI). This new in vitro test method presented effective results for the tested chemicals. It should be further validated using a greater number of substances, and tested in different laboratories, in order to suitably evaluate reproducibility.
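The reproducibility criterion quoted above is the coefficient of variation across test batches, CV = standard deviation / mean. A minimal sketch of that computation, with hypothetical batch scores:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean (dimensionless)."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Hypothetical irritation scores for one chemical across three batches
print(coefficient_of_variation([2.1, 2.0, 2.2]) < 0.1)  # True -> reproducible
```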
Abstract:
Genetic polymorphisms have now been described in more than 200 systems affecting pharmacological responses (cytochromes P450, conjugation enzymes, transporters, receptors, effectors of response, protection mechanisms, determinants of immunity). Pharmacogenetic testing, i.e. the profiling of individual patients for such variations, is about to become widely available. Recent progress in the pharmacogenetics of tamoxifen, oral anticoagulants and anti-HIV agents is reviewed in order to discuss critically their potential impact on prescription, and their contribution to, and limits for, improving the rational and safe use of pharmaceuticals. Prospective controlled trials are required to evaluate large-scale pharmacogenetic testing in therapeutics. Ethical, social and psychological issues deserve particular attention.
Abstract:
The Treatise on Quadrature of Fermat (c. 1659), besides containing the first known proof of the computation of the area under a higher parabola, $\int x^{m/n}\,dx$, or under a higher hyperbola, $\int x^{-m/n}\,dx$, with the appropriate limits of integration in each case, has a second part which was not understood by Fermat's contemporaries. This second part of the Treatise is obscure and difficult to read, and even the great Huygens described it as 'published with many mistakes and it is so obscure (with proofs redolent of error) that I have been unable to make any sense of it'. Far from the confusion that Huygens attributes to it, in this paper we try to prove that Fermat, in writing the Treatise, had a very clear goal in mind and managed to attain it by means of a simple and original method. Fermat reduced the quadrature of a great number of algebraic curves to the quadrature of known curves: the higher parabolas and hyperbolas of the first part of the paper. Others he reduced to the quadrature of the circle. We shall see how the clever use of two procedures, quite novel at the time, the change of variables and a particular case of the formula of integration by parts, provided Fermat with the necessary tools to square, very easily, curves as well known as the folium of Descartes, the cissoid of Diocles or the witch of Agnesi.
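For reference, the quadratures established in the first part of the Treatise take the following form in modern notation (a standard rendering, not Fermat's own):

```latex
% Area under a higher parabola on [0, a], for m/n > 0:
\int_0^a x^{m/n}\,dx = \frac{a^{\frac{m}{n}+1}}{\frac{m}{n}+1}

% Area under a higher hyperbola on [a, \infty), for m/n > 1:
\int_a^\infty x^{-m/n}\,dx = \frac{a^{1-\frac{m}{n}}}{\frac{m}{n}-1}
```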