928 results for python django bootstrap
Abstract:
The aim is to enable best teaching practices through information technologies, so as to make learning more effective for students and to give teachers simple, flexible access to the full range of capabilities, while maximizing the efficiency and suitability of the implementation using information technologies. An e-learning modeling theory is defined that, on the one hand, provides a global view of e-learning modeling and, on the other, fully models, from different points of view, aspects where gaps have been detected. A further objective is to create a new adaptive hint system for intelligent tutoring using Semantic Web techniques, in which several of the different aspects of the theory are applied. This thesis provides an e-learning modeling theory that includes a global view of what to model and how, the interrelations between different concepts and elements, an ideal vision of e-learning, a proposed life-cycle development process, and a general plan for evaluating the different aspects involved. In addition, as part of this theory, the relations between learning management system functionalities and current e-learning standards have been analyzed; a new model extending UML and another based on the IMS-CP (Content Packaging) specification have been defined for modeling complete courses in learning management systems; contributions have been made to several authoring tools that can be seen as natural-language models of different aspects of e-learning, designed to be easy to use for teachers without deep technical knowledge; and a new theory of personalized adaptation rules has been created, in which rules are atomic, reusable, interchangeable, and interoperable.
A new hint specification for problem-based learning has been defined that collects functionalities from other state-of-the-art systems but also includes new functionalities based on original ideas, with a pedagogical justification for each aspect. A mapping to XML and another representation in UML have been established. Likewise, an authoring tool has been designed that lets teachers without deep technical knowledge create hint-based exercises conforming to the specification. To put this hint model into practice, a hint-player module written in Python has been implemented as an extension to the XTutor intelligent tutor. This player deploys hint-based exercises that cover the cases of the newly defined specification and makes them available on the Web for use by students. An innovative competition tool has also been designed to harness motivation together with problem-based learning.
Abstract:
Abstract based on that of the publication
Abstract:
Abstract taken from the publication
Abstract:
Abstract based on that of the publication
Abstract:
SEXTANTE is a framework for developing algorithms to process geographically referenced information; it currently includes more than two hundred algorithms that can operate on vector, alphanumeric, and raster data. GearScape, in turn, is a geoprocessing-oriented geographic information system with a declarative language that allows geoprocesses to be developed without complex development tools. The language is based on the SQL standard, extended with the OGC standard for simple feature access. Because it is much simpler than imperative programming languages (Java, .NET, Python, etc.), creating geoprocesses is also simpler, easier to document, and less prone to bugs, and execution is optimized automatically through indexes and other techniques. The ability to describe complex chains of operations is also valuable as documentation: all the steps for solving a given problem can be written down, recovered later, easily reused, communicated to someone else, and so on. In short, the GearScape geoprocessing language makes it possible to "talk about" geoprocesses. Integrating SEXTANTE into GearScape has a twofold goal. On the one hand, it provides the ability to use any of the algorithms through the usual SEXTANTE interface. On the other, it adds to the GearScape geoprocessing language the ability to invoke SEXTANTE algorithms. In this way, any problem solved by combining several of these algorithms can be described in the GearScape geoprocessing language.
To the advantages of the GearScape language for defining geoprocesses is added the range of geoprocesses available in SEXTANTE, so the GearScape geoprocessing language lets us "speak" using SEXTANTE vocabulary.
Abstract:
This note considers the variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully which also allows us to identify sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time including the estimators of Zelterman and Chao and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators by Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources allows also a new understanding of how resampling techniques like the Bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness of identification proportion for approaching the question of sample size choice.
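The conditioning technique referred to above is an application of the law of total variance. Writing \(\hat N\) for the population size estimator and \(\hat\theta\) for the fitted model parameters (notation assumed here, not taken from the note), the decomposition into the two sources of variation reads:

```latex
\operatorname{Var}(\hat N)
  = \mathrm{E}\!\left[\operatorname{Var}\!\left(\hat N \mid \hat\theta\right)\right]
  + \operatorname{Var}\!\left(\mathrm{E}\!\left[\hat N \mid \hat\theta\right]\right)
```

The two terms correspond to the two sources the note identifies: the binomial variance due to sampling n units from a population of size N, and the variance due to estimating the model parameters.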
Abstract:
This article assesses the extent to which sampling variation affects findings about Malmquist productivity change derived using data envelopment analysis (DEA), in the first stage by calculating productivity indices and in the second stage by investigating the farm-specific change in productivity. Confidence intervals for Malmquist indices are constructed using Simar and Wilson's (1999) bootstrapping procedure. The main contribution of this article is to account in the second stage for the information provided by the first-stage bootstrap. The DEA SEs of the Malmquist indices given by bootstrapping are employed in an innovative heteroscedastic panel regression, using a maximum likelihood procedure. The application is to a sample of 250 Polish farms over the period 1996 to 2000. The confidence interval results suggest that the second half of the 1990s for Polish farms was characterized not so much by productivity regress as by stagnation. As for the determinants of farm productivity change, we find that the integration of the DEA SEs in the second-stage regression is significant in explaining a proportion of the variance in the error term. Although our heteroscedastic regression results differ from those of standard OLS, in terms of significance and sign, they are consistent with theory and previous research.
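The second-stage idea, feeding first-stage bootstrap SEs into a heteroscedastic regression, can be sketched generically. This is not the authors' exact maximum-likelihood specification; it is a minimal random-effects-style fit (all data and names hypothetical) in which each observation's error variance is its DEA SE squared plus a common component tau², profiled over a grid:

```python
import numpy as np

def hetero_ml(y, X, se, grid=np.linspace(0.0, 2.0, 201)):
    """ML fit of y_i = x_i'b + e_i with Var(e_i) = tau2 + se_i^2.

    Profiles the likelihood over a grid of tau2 values; for each tau2,
    beta is the weighted least squares solution with weights 1/Var(e_i).
    """
    best = None
    for tau2 in grid:
        v = tau2 + se ** 2                      # per-observation error variance
        w = 1.0 / v
        XtWX = X.T @ (X * w[:, None])
        beta = np.linalg.solve(XtWX, X.T @ (y * w))
        resid = y - X @ beta
        nll = 0.5 * np.sum(np.log(v) + resid ** 2 / v)   # Gaussian neg. log-lik.
        if best is None or nll < best[0]:
            best = (nll, tau2, beta)
    return best[1], best[2]                     # (tau2 estimate, coefficients)
```

Observations with large first-stage SEs are automatically down-weighted, which is the point of carrying the bootstrap SEs into the second stage.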
Abstract:
This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
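As a toy illustration of resampling-based inference on efficiency scores: the following is a naive percentile bootstrap on hypothetical scores, not Simar and Wilson's (1998) smoothed homogeneous procedure, which instead resamples from a kernel-smoothed distribution of the DEA estimates:

```python
import numpy as np

def percentile_ci(scores, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Naive percentile-bootstrap CI for a statistic of efficiency scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    # resample with replacement and recompute the statistic each time
    boots = np.array([stat(rng.choice(scores, size=scores.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The smoothing step in the real procedure matters because DEA scores are bounded at 1 and the naive bootstrap is inconsistent there; this sketch only conveys the resampling mechanics.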
Abstract:
This article illustrates the usefulness of applying bootstrap procedures to total factor productivity Malmquist indices, derived with data envelopment analysis (DEA), for a sample of 250 Polish farms during 1996-2000. The confidence intervals constructed as in Simar and Wilson suggest that the common portrayal of productivity decline in Polish agriculture may be misleading. However, a cluster analysis based on bootstrap confidence intervals reveals that important policy conclusions can be drawn regarding productivity enhancement.
Abstract:
Two models for predicting Septoria tritici on winter wheat (cv. Riband) were developed using a program based on an iterative search of correlations between disease severity and weather. Data from four consecutive cropping seasons (1993/94 until 1996/97) at nine sites throughout England were used. A qualitative model predicted the presence or absence of Septoria tritici (at a 5% severity threshold within the top three leaf layers) using winter temperature (January/February) and wind speed up to about the first node detectable growth stage. For sites above the disease threshold, a quantitative model predicted severity of Septoria tritici using rainfall during stem elongation. A test statistic was derived to test the validity of the iterative search used to obtain both models. This statistic was used in combination with bootstrap analyses in which the search program was rerun using weather data from previous years, and therefore uncorrelated with the disease data, to investigate how likely correlations such as those found in our models would have been in the absence of genuine relationships.
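The validity check described above, rerunning the correlation search against weather data that cannot be related to the disease, can be sketched schematically. All names and data here are hypothetical; the real program searched many candidate weather windows and variables:

```python
import numpy as np

def max_abs_corr(disease, weather_windows):
    """Largest |correlation| between disease severity and any candidate
    weather summary (one column per candidate window/variable)."""
    d = disease - disease.mean()
    best = 0.0
    for col in weather_windows.T:
        c = col - col.mean()
        r = abs((d @ c) / (np.linalg.norm(d) * np.linalg.norm(c)))
        best = max(best, r)
    return best

def null_exceedance(disease, null_weather_sets, observed):
    """Fraction of uncorrelated weather sets whose best correlation matches
    or beats the observed one -- a check against chance capitalisation."""
    hits = sum(max_abs_corr(disease, W) >= observed for W in null_weather_sets)
    return hits / len(null_weather_sets)
```

A small exceedance fraction suggests the fitted correlations are unlikely to be an artefact of searching over many candidate predictors.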
Abstract:
We seek to address formally the question raised by Gardner (2003) in his Elmhirst lecture as to the direction of causality between agricultural value added per worker and Gross Domestic Product (GDP) per capita. Using the Granger causality test in the panel data analyzed by Gardner for 85 countries, we find overwhelming evidence that supports the conclusion that agricultural value added is the causal variable in developing countries, while the direction of causality in developed countries is unclear. We also examine further the use of the Granger causality test in integrated data and provide evidence that the performance of the test can be increased in small samples through the use of the bootstrap.
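A residual-bootstrap version of the Granger test can be sketched as follows. This is a generic illustration (one equation, single lag order, hypothetical data), not the authors' exact panel procedure: fit the restricted model under the null that x does not cause y, then regenerate y from its own dynamics plus resampled residuals to build the null distribution of the test statistic:

```python
import numpy as np

def granger_stat(y, x, p=1):
    """F-type statistic for 'x Granger-causes y' with p lags (plain OLS)."""
    n = len(y)
    Y = y[p:]
    Zr = np.column_stack([np.ones(n - p)] +
                         [y[p - j - 1:n - j - 1] for j in range(p)])
    Zu = np.column_stack([Zr] + [x[p - j - 1:n - j - 1] for j in range(p)])
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    return ((rss_r - rss_u) / p) / (rss_u / (len(Y) - Zu.shape[1]))

def bootstrap_pvalue(y, x, p=1, n_boot=500, seed=0):
    """Residual-bootstrap p-value under the null that x does not cause y."""
    rng = np.random.default_rng(seed)
    stat = granger_stat(y, x, p)
    n = len(y)
    Zr = np.column_stack([np.ones(n - p)] +
                         [y[p - j - 1:n - j - 1] for j in range(p)])
    beta = np.linalg.lstsq(Zr, y[p:], rcond=None)[0]
    resid = y[p:] - Zr @ beta
    count = 0
    for _ in range(n_boot):
        yb = y.astype(float).copy()
        eps = rng.choice(resid, size=n - p, replace=True)
        for t in range(p, n):                    # simulate y under the null
            yb[t] = beta[0] + sum(beta[1 + j] * yb[t - 1 - j]
                                  for j in range(p)) + eps[t - p]
        if granger_stat(yb, x, p) >= stat:
            count += 1
    return count / n_boot
```

Comparing the observed statistic to this simulated null distribution, rather than to the asymptotic F distribution, is the small-sample refinement the abstract refers to.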
Abstract:
Objectives: This study reports the cost-effectiveness of a preventive intervention, consisting of counseling and specific support for the mother-infant relationship, targeted at women at high risk of developing postnatal depression. Methods: A prospective economic evaluation was conducted alongside a pragmatic randomized controlled trial in which women considered at high risk of developing postnatal depression were allocated randomly to the preventive intervention (n = 74) or to routine primary care (n = 77). The primary outcome measure was the duration of postnatal depression experienced during the first 18 months postpartum. Data on health and social care use by women and their infants up to 18 months postpartum were collected, using a combination of prospective diaries and face-to-face interviews, and then were combined with unit costs (£, year 2000 prices) to obtain a net cost per mother-infant dyad. The nonparametric bootstrap method was used to present cost-effectiveness acceptability curves and net benefit statistics at alternative willingness-to-pay thresholds held by decision makers for preventing 1 month of postnatal depression. Results: Women in the preventive intervention group were depressed for an average of 2.21 months (9.57 weeks) during the study period, whereas women in the routine primary care group were depressed for an average of 2.70 months (11.71 weeks). The mean health and social care costs were estimated at £2,396.9 per mother-infant dyad in the preventive intervention group and £2,277.5 per mother-infant dyad in the routine primary care group, providing a mean cost difference of £119.5 (bootstrap 95 percent confidence interval [CI], -£535.4 to £784.9). At a willingness-to-pay threshold of £1,000 per month of postnatal depression avoided, the probability that the preventive intervention is cost-effective is .71 and the mean net benefit is £383.4 (bootstrap 95 percent CI, -£863.3 to £1,581.5).
Conclusions: The preventive intervention is likely to be cost-effective even at relatively low willingness to pay thresholds for preventing 1 month of postnatal depression during the first 18 months postpartum. Given the negative impact of postnatal depression on later child development, further research is required that investigates the longer-term cost-effectiveness of the preventive intervention in high risk women.
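The net-benefit calculation in the abstract above can be sketched with a nonparametric bootstrap. This is a generic illustration with made-up data, not the study's analysis: the incremental net monetary benefit at willingness-to-pay λ is NB(λ) = λ·ΔE − ΔC, where ΔE is months of depression avoided and ΔC the incremental cost:

```python
import numpy as np

def net_benefit_ci(cost_tx, eff_tx, cost_ctl, eff_ctl, wtp,
                   n_boot=5000, seed=0):
    """Bootstrap the incremental net monetary benefit NB = wtp * dE - dC.

    eff_* hold months depressed per patient, so the health gain is the
    reduction: dE = mean(eff_ctl) - mean(eff_tx).
    """
    rng = np.random.default_rng(seed)
    ct, et = np.asarray(cost_tx, float), np.asarray(eff_tx, float)
    cc, ec = np.asarray(cost_ctl, float), np.asarray(eff_ctl, float)
    nbs = np.empty(n_boot)
    for b in range(n_boot):
        it = rng.integers(0, ct.size, ct.size)  # resample patients, not values,
        ic = rng.integers(0, cc.size, cc.size)  # keeping cost/effect pairs together
        d_eff = ec[ic].mean() - et[it].mean()   # months of depression avoided
        d_cost = ct[it].mean() - cc[ic].mean()  # incremental cost
        nbs[b] = wtp * d_eff - d_cost
    ceac = (nbs > 0).mean()                     # prob. cost-effective at this wtp
    lo, hi = np.quantile(nbs, [0.025, 0.975])
    return nbs.mean(), (lo, hi), ceac

```

Evaluating `ceac` over a range of `wtp` values traces out the cost-effectiveness acceptability curve the abstract mentions.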
Abstract:
The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
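The generic algorithm underlying the tracker described above is the bootstrap (sampling-importance-resampling) particle filter. A minimal sketch for a 1-D toy model follows; the dendrite application itself tracks connected segments in 3-D image stacks, which this sketch does not attempt:

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, q=0.5, r=1.0, seed=0):
    """Minimal bootstrap (SIR) particle filter for a 1-D random-walk state
    observed in Gaussian noise: x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial prior draw
    means = []
    for y in ys:
        # propagate each particle through the state transition
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # weight by the observation likelihood (log-space for stability)
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))          # filtered posterior mean
        # multinomial resampling: duplicate heavy particles, drop light ones
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(means)
```

Resampling at every step is what distinguishes the bootstrap filter; it keeps the particle cloud concentrated where the likelihood is high, at the cost of some sample impoverishment.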