922 results for Efficient error correction
Abstract:
This paper analyses the correction of errors and mistakes made by students in the foreign language teaching classroom. Its goal is to identify typical correction behaviors in language teaching classrooms in Cape Verde and to raise teachers' awareness of better correction practices.
Abstract:
During one week, beginning 18 days after transplantation, nude mice bearing human colon carcinoma ranging from 115 to 943 mm³ (mean 335 mm³) were treated by repeated intravenous injections of either iodine-131 (131I)-labeled intact antibodies or 131I-labeled corresponding F(ab')2 fragments of a pool of four monoclonal antibodies (MAbs) directed against distinct epitopes of carcinoembryonic antigen (CEA). Complete tumor remission was observed in 8 of 10 mice after therapy with F(ab')2, and 6 of the animals survived 10 mo in good health. In contrast, after treatment with intact MAbs, tumors relapsed in 7 of 8 mice after remission periods of 1 to 3.5 mo, despite the fact that body weight loss and depression of peripheral white blood cells (symptoms of radiation toxicity), as well as the calculated radiation doses for liver, spleen, bone, and blood, were equal to or higher in these animals than in mice treated with F(ab')2.
Abstract:
The canton of Vaud model of the pre-hospital chain of survival is an example of an efficient way to deal with pre-hospital emergencies. It revolves around a centrally located dispatch center managing emergencies according to specific keywords, allowing dispatchers to send out resources including general practitioners, ambulances, physician-staffed fast-response cars or physician-staffed helicopters, and specific equipment. The Vaud pre-hospital chain of survival has been tailored to geographical, demographic, and political necessities. It undergoes constant reassessment and requires continuous adaptation to the ever-changing demographics and epidemiology of pre-hospital medicine.
Abstract:
Intratumoural (i.t.) injection of radio-iododeoxyuridine (IdUrd), a thymidine (dThd) analogue, is envisaged for targeted Auger electron- or beta-radiation therapy of glioblastoma. Here, the biodistribution of [(125)I]IdUrd was evaluated 5 hr after i.t. injection in subcutaneous human glioblastoma xenografts LN229 after different intravenous (i.v.) pretreatments with fluorodeoxyuridine (FdUrd). FdUrd is known to block de novo dThd synthesis, thus favouring DNA incorporation of radio-IdUrd. Results showed that pretreatment with 2 mg/kg FdUrd i.v., given in 2 fractions 0.5 hr and 1 hr before injection of radio-IdUrd, resulted in a mean tumour uptake of 19.8% of injected dose (% ID), representing 65.3% ID/g for tumours of approx. 0.35 g. Tumour uptake of radio-IdUrd in non-pretreated mice was only 4.1% ID. Very low uptake was observed in normal non-dividing and dividing tissues, with a maximum concentration of 2.9% ID/g measured in spleen. Pretreatment with a higher FdUrd dose of 10 mg/kg prolonged the increased tumour uptake of radio-IdUrd up to 5 hr. A competition experiment performed in FdUrd-pretreated mice using i.t. co-injection of excess dThd resulted in very low tumour retention of [(125)I]IdUrd. DNA isolation experiments showed that, on average, >95% of tumour (125)I activity was incorporated in DNA. In conclusion, these results show that close to 20% ID of radio-IdUrd injected i.t. was incorporated in tumour DNA after i.v. pretreatment with clinically relevant doses of FdUrd, and that this approach may be further exploited for diffusion and therapy studies with Auger electron- and/or beta-radiation-emitting radio-IdUrd.
Abstract:
The research presented here stems from day-to-day concerns as an education professional and administrator who has experienced and closely followed pedagogical and teaching practice. Driven by these imperatives, I felt obliged to seek an answer, to reflect on the difficulties of teaching the Portuguese language, and to better understand these obstacles, in particular the linguistic error in teaching practice, its underlying causes and, possibly, the share of responsibility of the other actors in the process, namely Portuguese language teachers and the system itself. Accordingly, the present study addresses error as a concept, marked by the polysemy of its definition and approached by multiple teaching methodologies, but also as a central element in the teaching and learning of a second language in basic education, in contexts where two very close languages, Portuguese and Cape Verdean, coexist; we also aim to list the procedures and attitudes of the actors in the process, as well as the essential didactic and pedagogical means for detecting, analysing, and treating error. Learning a second language such as Portuguese, in a context like that of Cape Verde, is a complex and sometimes lengthy task that cannot be reduced to routine and predictable classroom acts that ignore the needs, dispositions, and interests of learners, who are placed at a crossroads: learning a language that is not their own but that they cannot refuse. The apparent closeness of the two languages is an additional obstacle, as it fosters interference, the main cause of error, despite the progress made in developing methodologies and support materials that assist and make the process of acquiring a second language more efficient. To operationalise the subject, a study was carried out using error analysis on forty-one (41) texts produced by 6th-grade students from five basic education schools in Tarrafal, Cape Verde, with the aim of collecting information, analysing it and, after reflecting on the results, drawing conclusions about its implications for the teaching and learning of the Portuguese language.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
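As a rough illustration of the kind of model described above, here is a minimal sketch, assuming hypothetical precomputed implicit-solvent descriptors in place of the actual GB/SA terms used by iLOGP; it fits a two-parameter multiple linear model with five-fold cross-validation and reports r, MAE, and RMSE.

```python
# Minimal sketch of a two-descriptor multiple linear model for log P,
# assuming hypothetical precomputed GB/SA-like solvation terms (not the
# actual iLOGP descriptors). Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Placeholder data: two implicit-solvent descriptors per molecule and
# the experimental log Po/w (synthetic here, for illustration only).
X = rng.normal(size=(500, 2))          # e.g. polar and nonpolar solvation terms
y = 1.3 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=500)

model = LinearRegression()
# Five-fold cross-validated predictions, as used for internal validation.
y_cv = cross_val_predict(model, X, y, cv=5)

r = np.corrcoef(y, y_cv)[0, 1]
mae = np.mean(np.abs(y - y_cv))
rmse = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"r = {r:.2f}, MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```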
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
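To make the class of rules concrete, a schematic form is sketched below; the weights are purely illustrative placeholders and do not correspond to the optimal rule derived in the paper. Here a_ij denotes the access price that network i pays to network j, c_i and c_j the marginal costs, and p_i and p_j the retail (or average retail) prices.

```latex
% Schematic linear (retail-benchmarking) access pricing rule; the weights
% \alpha, \beta, \gamma, \delta are illustrative placeholders, not the
% optimal values derived in the paper.
\[
  a_{ij} \;=\; \alpha\, c_i + \beta\, c_j + \gamma\, p_i + \delta\, p_j
\]
```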
Abstract:
This article investigates the main sources of heterogeneity in regional efficiency. We estimate a translog stochastic frontier production function for the Spanish regions over the period 1964-1996 in order to measure and explain changes in technical efficiency. Our results confirm that regional inefficiency is significantly and positively correlated with the ratio of public capital to private capital. The proportion of service industries in private capital, the proportion of public capital devoted to transport infrastructures, industrial specialization, and spatial spillovers from transport infrastructures in neighbouring regions all contributed significantly to improving regional efficiency.
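For reference, a standard translog stochastic frontier specification of the kind mentioned above is sketched below; the exact inputs and the inefficiency specification used in the article are not reproduced here.

```latex
% Generic translog stochastic frontier production function for region i at
% time t, with inputs x_j (e.g. labour, private and public capital);
% v_{it} is statistical noise and u_{it} >= 0 is technical inefficiency.
\[
  \ln y_{it} \;=\; \beta_0
    + \sum_{j} \beta_j \ln x_{jit}
    + \frac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln x_{jit} \ln x_{kit}
    + v_{it} - u_{it}
\]
```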
Abstract:
This paper extends existing results on the type of insurance contracts needed for insurance market efficiency to a dynamic setting. It introduces continuously open markets that allow for more efficient asset allocation. It also eliminates the role of preferences and endowments in the classification of risks, which is done primarily in terms of the actuarial properties of the underlying risk process. The paper further extends insurability to include correlated and catastrophic events. Under these very general conditions, the paper defines a condition that determines whether a small number of standard insurance contracts (together with aggregate assets) suffices to complete markets or whether one needs to introduce such assets as mutual insurance.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
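As a rough sketch of the maximal-discrepancy computation described in the last sentence (split the training data in half, flip the labels of one half, and run empirical risk minimization on the modified sample), the following illustration uses a depth-limited decision tree as a stand-in for the empirical risk minimizer; the learner, data, and features are hypothetical.

```python
# Sketch of a maximal-discrepancy penalty estimate: flip the labels of the
# second half of the training set, run (approximate) empirical risk
# minimization on the modified data, and measure the gap between the error
# rates on the two halves. Learner and data are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # binary labels

half = len(y) // 2
y_flipped = y.copy()
y_flipped[half:] = 1 - y_flipped[half:]        # flip labels on the second half

# Approximate ERM on the label-flipped sample.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y_flipped)

err_first = np.mean(clf.predict(X[:half]) != y[:half])
err_second = np.mean(clf.predict(X[half:]) != y[half:])
max_discrepancy = abs(err_first - err_second)  # data-based complexity penalty
print(f"maximal discrepancy estimate: {max_discrepancy:.3f}")
```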
Abstract:
Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or outcome variable (disease)
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant
- Increasing sample size will help minimise the impact of measurement error in an outcome variable but will only make estimates more precisely wrong when the error is in an exposure variable
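A small simulation makes the two effects above concrete; it is an illustrative sketch with arbitrary noise levels, not material from the article.

```python
# Simulation sketch: random measurement error in the exposure attenuates the
# slope towards the null, while error in the outcome mainly inflates the
# standard error of the slope. Noise levels are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 2000, 1.0
x = rng.normal(size=n)
y = true_slope * x + rng.normal(size=n)

def ols_slope_and_se(x, y):
    # Simple-regression slope and its standard error.
    xc, yc = x - x.mean(), y - y.mean()
    slope = np.sum(xc * yc) / np.sum(xc**2)
    resid = yc - slope * xc
    se = np.sqrt(np.sum(resid**2) / (len(x) - 2) / np.sum(xc**2))
    return slope, se

x_err = x + rng.normal(size=n)   # random error added to the exposure
y_err = y + rng.normal(size=n)   # random error added to the outcome

print("no added error:    slope=%.2f, se=%.3f" % ols_slope_and_se(x, y))
print("error in exposure: slope=%.2f, se=%.3f" % ols_slope_and_se(x_err, y))
print("error in outcome:  slope=%.2f, se=%.3f" % ols_slope_and_se(x, y_err))
```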
Abstract:
The spatial, spectral, and temporal resolutions of remote sensing images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since the geometric calibration process is one of the most time-consuming steps when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, especially by exploiting computation on shared-memory multi-threading hardware. A parallel implementation of the most time-consuming step of remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, discussing how to find the optimum hardware for a cost-effective execution.
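Since the abstract describes the approach only at a high level, the following is a minimal conceptual sketch, written in Python rather than the authors' OpenMP/C implementation, of distributing a per-row geometric resampling step across workers on shared-memory hardware; the function names, the affine transform, and the nearest-neighbour resampling are hypothetical placeholders.

```python
# Conceptual sketch (not the authors' OpenMP code): distribute the per-row
# resampling of a geometric correction across CPU workers. The affine
# transform and nearest-neighbour resampling are placeholders.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def correct_rows(args):
    image, rows, transform = args
    h, w = image.shape
    out = np.zeros((len(rows), w), dtype=image.dtype)
    for i, r in enumerate(rows):
        for c in range(w):
            # Hypothetical inverse mapping into the source image.
            src_r = int(transform[0] * r + transform[1] * c + transform[2])
            src_c = int(transform[3] * r + transform[4] * c + transform[5])
            if 0 <= src_r < h and 0 <= src_c < w:
                out[i, c] = image[src_r, src_c]
    return rows, out

def parallel_geometric_correction(image, transform, workers=4):
    chunks = np.array_split(np.arange(image.shape[0]), workers)
    corrected = np.zeros_like(image)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        tasks = [(image, rows, transform) for rows in chunks]
        for rows, block in pool.map(correct_rows, tasks):
            corrected[rows] = block
    return corrected

if __name__ == "__main__":
    img = np.random.randint(0, 255, size=(512, 512), dtype=np.uint8)
    affine = (1.0, 0.0, 5.0, 0.0, 1.0, -3.0)   # illustrative shift-only transform
    print(parallel_geometric_correction(img, affine, workers=4).shape)
```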
Abstract:
Detecting local differences between groups of connectomes is a great challenge in neuroimaging, because of the large number of tests that have to be performed and the resulting burden of multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering step is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We show the benefit of the proposed strategy by means of different simulations, and we present a real application comparing connectomes of preschool children and adolescents.
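The abstract does not spell out the exact procedure, so the following is only a generic screen-then-filter sketch under assumed inputs: connection-level p-values are combined within each subnetwork (Simes combination), subnetworks passing a screening threshold are kept, and a standard Benjamini-Hochberg correction is then applied to the connection-level tests inside the selected subnetworks.

```python
# Generic screen-then-filter sketch (not the authors' exact procedure):
# 1) screen subnetworks using an aggregate p-value (Simes combination),
# 2) apply Benjamini-Hochberg only within the subnetworks that survive.
# Inputs: p-values per connection and a mapping connection -> subnetwork.
import numpy as np

def simes(pvals):
    # Simes combination of the p-values within one subnetwork.
    p = np.sort(np.asarray(pvals))
    return np.min(p * len(p) / np.arange(1, len(p) + 1))

def benjamini_hochberg(pvals, alpha):
    # Boolean rejections under the BH step-up procedure.
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(len(p), dtype=bool)
    rejected[order[:k]] = True
    return rejected

def screen_then_filter(pvals, subnetwork_of, alpha=0.05):
    pvals = np.asarray(pvals)
    subnetwork_of = np.asarray(subnetwork_of)
    rejections = np.zeros(len(pvals), dtype=bool)
    # Screening step at the subnetwork level.
    selected = [s for s in np.unique(subnetwork_of)
                if simes(pvals[subnetwork_of == s]) <= alpha]
    # Filtering step at the connection level, within selected subnetworks.
    for s in selected:
        idx = np.nonzero(subnetwork_of == s)[0]
        rejections[idx] = benjamini_hochberg(pvals[idx], alpha)
    return rejections

# Toy usage: 100 connection-level p-values grouped into 5 subnetworks.
rng = np.random.default_rng(0)
p = rng.uniform(size=100)
p[:10] = rng.uniform(0, 0.001, size=10)
groups = np.repeat(np.arange(5), 20)
print(screen_then_filter(p, groups).sum(), "connections flagged")
```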