993 results for "Wooden-frame houses Queensland Testing"
Abstract:
This study aims to identify the constraints that complicate the assessment of speaking in Cape Verdean EFL classrooms. A literature review was conducted of studies already done in the field of testing speaking in Cape Verdean EFL classrooms. The study was carried out using a qualitative method with Cape Verdean secondary school English teachers. The participants answered a questionnaire that asked for teachers' opinions about and experiences with speaking assessment. The study found that Cape Verdean English teachers do not adequately assess their students' speaking ability, and it pointed out the constraints of the Cape Verdean context that complicate the assessment of speaking in EFL classrooms. The teachers reported the main constraints, in order of significance, as large classes, difficulty in marking oral tests, difficulty in designing oral tests and difficulty in separating the speaking skill from the listening skill. The study concluded that Cape Verdean English teachers need assistance with new tools to assess speaking in their classrooms. The author therefore makes some suggestions, first to the Ministry of Education and then to English teachers in the field, to assist them with the implementation of regular oral testing in Cape Verdean English classrooms.
Abstract:
BACKGROUND: Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Owing to its high sensitivity in detecting viral minorities and its power to dissect complex viral resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether adding rPRT to genotypic resistance testing (GRT), compared to GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. METHODS: Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). RESULTS: Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT alone were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared to GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicated a significant improvement with GRT+rPRT compared to GRT alone (OR 1.68, 95% CI 1.31-2.15). CONCLUSIONS: In heavily pre-treated patients, rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
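The response endpoint defined in this abstract (HIV-1 RNA <50 copies/mL at follow-up, or a ≥1.5 log reduction from baseline) can be sketched as a simple classifier. The function name and interface below are our own illustration, not code from the study:

```python
import math

def virological_response(baseline_rna: float, followup_rna: float) -> bool:
    """Virological response as defined in the abstract:
    HIV-1 RNA < 50 copies/mL at follow-up, or a >= 1.5 log10
    reduction from the baseline viral load."""
    if followup_rna < 50:
        return True
    return math.log10(baseline_rna) - math.log10(followup_rna) >= 1.5
```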
Abstract:
This paper discusses the role of deterministic components in the DGP and in the auxiliary regression model which underlies the implementation of the Fractional Dickey-Fuller (FDF) test for I(1) against I(d) processes with d ∈ [0, 1). This is an important test in many economic applications because I(d) processes with d < 1 are mean-reverting although, when 0.5 ≤ d < 1, like I(1) processes, they are nonstationary. We show how simple the implementation of the FDF test is in these situations, and argue that it has better properties than LM tests. A simple testing strategy entailing only asymptotically normally distributed tests is also proposed. Finally, an empirical application is provided where the FDF test allowing for deterministic components is used to test for long memory in the per capita GDP of several OECD countries, an issue that has important consequences for discriminating between growth theories, and on which there is some controversy.
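The fractional difference filter (1 − L)^d that defines an I(d) process can be sketched via its binomial expansion. This is an illustrative helper under our own naming, not the authors' FDF test statistic:

```python
def fracdiff(x, d):
    """Apply the truncated fractional difference filter (1 - L)^d to x.
    Coefficients follow the binomial expansion:
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j.
    For d = 1 this reduces to an ordinary first difference."""
    n = len(x)
    pi = [1.0]
    for j in range(1, n):
        pi.append(pi[-1] * (j - 1 - d) / j)
    # Convolve the filter with the series (truncated at t = 0).
    return [sum(pi[j] * x[t - j] for j in range(t + 1)) for t in range(n)]
```

Note that the filter is truncated at the start of the sample, so the first few filtered values use fewer terms than the full expansion.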
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
This study describes a task that combines random searching with goal directed navigation. The testing was conducted on a circular elevated open field (80 cm in diameter), with an unmarked target area (20 cm in diameter) in the center of 1 of the 4 quadrants. Whenever the rat entered the target area, the computerized tracking system released a pellet to a random point on the open field. Rats were able to learn the task under light and in total darkness, and on a stable or a rotating arena. Visual information was important in light, but idiothetic information became crucial in darkness. Learning of a new position was quicker under light than in total darkness on a rotating arena. The place preference task should make it possible to study place cells (PCs) when the rats use an allothetic (room frame) or idiothetic (arena frame) representation of space and to compare the behavioral response with the PCs' activity.
Abstract:
The paper proposes a technique to jointly test for groupings of unknown size in the cross-sectional dimension of a panel and to estimate the parameters of each group, and applies it to identifying convergence clubs in per capita income. The approach uses the predictive density of the data, conditional on the parameters of the model. The steady state distribution of European regional data clusters around four poles of attraction with different economic features. The distribution of per capita income of OECD countries has two poles of attraction, and each group has clearly identifiable economic characteristics.
Abstract:
Due to practical difficulties in obtaining direct genetic estimates of effective sizes, conservation biologists have to rely on so-called 'demographic models' which combine life-history and mating-system parameters with F-statistics in order to produce indirect estimates of effective sizes. However, for the same practical reasons that prevent direct genetic estimates, the accuracy of demographic models is difficult to evaluate. Here we use individual-based, genetically explicit computer simulations in order to investigate the accuracy of two such demographic models aimed at investigating the hierarchical structure of populations. We show that, by and large, these models provide good estimates under a wide range of mating systems and dispersal patterns. However, one of the models should be avoided whenever the focal species' breeding system approaches monogamy with no sex bias in dispersal or when a substructure within social groups is suspected because effective sizes may then be strongly overestimated. The timing during the life cycle at which F-statistics are evaluated is also of crucial importance and attention should be paid to it when designing field sampling since different demographic models assume different timings. Our study shows that individual-based, genetically explicit models provide a promising way of evaluating the accuracy of demographic models of effective size and delineate their field of applicability.
Abstract:
Expected utility theory (EUT) has been challenged as a descriptive theory in many contexts, and medical decision analysis is no exception. Several researchers have suggested that rank dependent utility theory (RDUT) may accurately describe how people evaluate alternative medical treatments. Recent research in this domain has addressed a relevant feature of RDU models (probability weighting), but to date no direct test of this theory has been made. This paper provides a test of the main axiomatic difference between EUT and RDUT when health profiles are used as outcomes of risky treatments. Overall, EU best described the data. However, evidence of the editing and cancellation operation hypothesized in Prospect Theory and Cumulative Prospect Theory was apparent in our study: we found that RDU outperformed EU in the presentation of the risky treatment pairs in which the common outcome was not obvious. The influence of framing effects on the performance of RDU, and their importance as a topic for future research, is discussed.
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
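As a point of reference, the Holm (1979) stepdown method mentioned in this abstract can be sketched in a few lines. This illustrates only the generic stepdown logic (reject in order of increasing p-values until the first failure), not the resampling-based procedures the paper constructs:

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's (1979) stepdown procedure: sort the m p-values, compare
    the k-th smallest (k = 0, 1, ...) to alpha / (m - k), and reject
    until the first comparison fails. Returns booleans in the original
    order (True = hypothesis rejected). Controls the FWE under any
    dependence structure of the test statistics."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # stepdown: once one fails, all larger p-values fail
    return reject
```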
Abstract:
We consider a dynamic multifactor model of investment with financing imperfections, adjustment costs and fixed and variable capital. We use the model to derive a test of financing constraints based on a reduced form variable capital equation. Simulation results show that this test correctly identifies financially constrained firms even when the estimation of firms' investment opportunities is very noisy. In addition, the test is well specified in the presence of both concave and convex adjustment costs of fixed capital. We confirm empirically the validity of this test on a sample of small Italian manufacturing companies.
Abstract:
This paper extends previous results on optimal insurance trading in the presence of a stock market that allows continuous asset trading and substantial personal heterogeneity, and applies those results in a context of asymmetric information, with reference to the role of genetic testing in insurance markets. We find a novel and surprising result under symmetric information: agents may optimally prefer to purchase full insurance despite the presence of unfairly priced insurance contracts and other assets which are correlated with insurance. Asymmetric information has a Hirshleifer-type effect which can be solved by suspending insurance trading. Nevertheless, agents can attain their first best allocations, which suggests that the practice of restricting insurance from being contingent on genetic tests can be efficient.
Coverage and nonresponse errors in an individual register frame-based Swiss telephone election study
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second order properties of aggregate saving and aggregate investment in an open economy.
Abstract:
Studies assessing skin irritation by chemicals have traditionally used laboratory animals; however, such methods are questionable regarding their relevance for humans. New in vitro methods have been validated, such as the reconstructed human epidermis (RHE) models (Episkin®, Epiderm®), but their accuracy against in vivo results such as the 4-h human patch test (HPT) is 76% at best (Epiderm®). There is a need for an in vitro method that better simulates the anatomo-pathological changes encountered in vivo. The aim was to develop an in vitro method to determine skin irritation using viable human skin through histopathology, and to compare the results for 4 tested substances with the main in vitro methods and the in vivo animal method (Draize test). Human skin removed during surgery was dermatomed and mounted on an in vitro flow-through diffusion cell system. Ten chemicals with known non-irritant (heptyl butyrate, hexyl salicylate, butyl methacrylate, isoproturon, bentazon, DEHP and methylisothiazolinone (MI)) or irritant properties (folpet, 1-bromohexane and methylchloroisothiazolinone (MCI/MI)), a negative control (sodium chloride) and a positive control (sodium lauryl sulphate) were applied. The skin was exposed for at least 4 h. Histopathology was performed to investigate irritation signs (spongiosis, necrosis, vacuolization). We obtained 100% accuracy with the HPT model, 75% with the RHE models and 50% with the Draize test for the 4 tested substances. The coefficients of variation (CV) between our three test batches were <0.1, showing good reproducibility. Furthermore, we objectively reported histopathological irritation signs on an irritation scale: strong (folpet), significant (1-bromohexane), slight (MCI/MI at 750/250 ppm) and none (isoproturon, bentazon, DEHP and MI). This new in vitro test method produced effective results for the tested chemicals. It should be further validated with a greater number of substances and tested in different laboratories in order to suitably evaluate reproducibility.
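The reproducibility measure quoted in this abstract, the coefficient of variation (CV < 0.1 across the three test batches), is the sample standard deviation divided by the mean. A minimal sketch, as our own helper rather than the authors' code:

```python
def coefficient_of_variation(values):
    """CV = sample standard deviation / mean; values near 0 indicate
    good reproducibility across repeated test batches."""
    mean = sum(values) / len(values)
    # Sample variance (n - 1 denominator), as is usual for small batches.
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return var ** 0.5 / mean
```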
Abstract:
Genetic polymorphisms have now been described in more than 200 systems affecting pharmacological responses (cytochromes P450, conjugation enzymes, transporters, receptors, effectors of response, protection mechanisms, determinants of immunity). Pharmacogenetic testing, i.e. the profiling of individual patients for such variations, is about to become widely available. Recent progress in the pharmacogenetics of tamoxifen, oral anticoagulants and anti-HIV agents is reviewed to critically discuss their potential impact on prescription and their contributions and limits for improving the rational and safe use of pharmaceuticals. Prospective controlled trials are required to evaluate large-scale pharmacogenetic testing in therapeutics. Ethical, social and psychological issues deserve particular attention.