965 results for Probabilistic choice models
Abstract:
The importance of extensive literature reading in the English as a Foreign Language (EFL) context has received increasing attention in recent research. Literature reading is also a required part of the national syllabi of the EFL courses offered to both adolescents and adults at upper secondary level in Sweden. This thesis investigates how teachers make literature choices for extensive reading in upper secondary EFL courses in Sweden. Eight teachers of three different student groups took part in the study, representing adolescent university preparatory programs and vocational programs, as well as programs for adult students. Questionnaires were used, and the data were analyzed for patterns, revealing three main factors affecting teachers' literature choice: language proficiency, reading experience and contextual factors. These three factors were fitted into the theoretical framework of psycholinguistic and sociolinguistic reading models, with the addition of a motivational research perspective. The results of this survey underline that teachers consider extensive reading important and that their motivation for literature choice relates primarily to factors associated with psycholinguistic reading models. The survey also points to the need for further investigation of teachers' own experiences of literature reading, searching for deeper motivational factors that influence teaching choices. Another future field of research is the choice of reading activities assigned together with the chosen literature, which probably also influences teachers' choices in the Swedish EFL classroom.
Abstract:
Due to the increase in water demand and hydropower energy, it is becoming increasingly important to operate hydraulic structures efficiently while sustaining multiple demands. In particular, companies, governmental agencies and consultancy offices require effective, practical integrated tools and decision support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions, radar and satellite data. Model performance is closely tied to the streamflow forecast, its uncertainty, and how that uncertainty is considered in decision making. While deterministic weather predictions and their corresponding streamflow forecasts restrict the manager to single deterministic trajectories, probabilistic forecasts can be a key solution by including uncertainty in the flow forecast scenarios used for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on a previously developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, the main water supply of Kocaeli City in northwestern Turkey. The reservoir is a typical example owing to its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System (EPS) data are used as the main inputs, and flow forecasts are produced for the year 2012 using HEC-HMS. A hydrometeorological rule-based reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future. It thus provides the operator with risk ranges for spillway discharge and reservoir level when compared with the deterministic approach. The framework is fully data driven, applicable and useful to the profession, and the knowledge can be transferred to other similar reservoir systems.
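A rough, hypothetical sketch of how equally probable ensemble inflow scenarios translate into risk ranges for reservoir level and spillway discharge (not code from the study; the mass-balance routing, scenario array and percentile choices are assumptions):

```python
import numpy as np

# Hypothetical ensemble of daily inflow forecasts (m3/s):
# 50 equally probable EPS members over a 5-day horizon.
rng = np.random.default_rng(42)
ensemble_inflow = rng.gamma(shape=2.0, scale=15.0, size=(50, 5))

# Route each member through a toy mass-balance "reservoir":
# storage_{t+1} = storage_t + inflow_t - release_t (units simplified).
storage0, release, capacity = 40.0, 20.0, 60.0
levels = np.empty_like(ensemble_inflow)
spill = np.empty_like(ensemble_inflow)
for m in range(ensemble_inflow.shape[0]):
    s = storage0
    for t in range(ensemble_inflow.shape[1]):
        s = s + ensemble_inflow[m, t] - release
        spill[m, t] = max(s - capacity, 0.0)   # spillway discharge proxy
        s = min(s, capacity)
        levels[m, t] = s

# Risk ranges: the spread of the ensemble instead of a single trajectory.
p10, p50, p90 = np.percentile(levels, [10, 50, 90], axis=0)
print("reservoir level P10/P50/P90 per day:", p10, p50, p90)
print("probability of any spill within horizon:",
      np.mean(spill.sum(axis=1) > 0))
```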
Abstract:
This doctoral dissertation analyzes two novels by the American novelist Robert Coover as examples of hypertextual writing on the book-bound page, as tokens of hyperfiction. The complexity displayed in the novels, John's Wife and The Adventures of Lucky Pierre, integrates the cultural elements that characterize the contemporary condition of capitalism and the technologized practices that have fostered a different subjectivity evidenced in hypertextual writing and reading: posthuman subjectivity. The models that account for the complexity of each novel are drawn from the concept of strange attractors in Chaos Theory and from the concept of the rhizome in Nomadology. The transformations the characters undergo in the degree of their corporeality set the plane on which to discuss turbulence and posthumanity. The notions of dynamic patterns and strange attractors, along with the concepts of the Body without Organs and the Rhizome, are interpreted, leading to a revision of narratology and to analytical categories appropriate to the study of the novels. The reading exercised throughout this dissertation enacts Daniel Punday's corporeal reading. The changes in the characters' degree of materiality are associated with the stages of order, turbulence and chaos in the story, bearing on the constitution of subjectivity within and along the reading process. Coover's inscription of planes of consistency to counter linearity and accommodate hypertextual features in paper-supported narratives describes the characters' trajectory as rhizomatic. The study leads to the conclusion that narrative today stands more as a regime in a rhizomatic relation with other regimes in cultural practice than as an exclusively literary form and genre. Besides this, posthuman subjectivity emerges as a class identity that holds hypertextual novels as its literary form of choice.
Abstract:
Estimation of demand and supply in differentiated products markets is a central issue in Empirical Industrial Organization and has been used to study the effects of taxes, mergers, the introduction of new goods and market power, among others. Logit and Random Coefficients Logit are examples of demand models used to study these effects. For the supply side, a Nash equilibrium in prices is generally assumed. This work presents a detailed discussion of these demand and supply models as well as the estimation procedure. Lastly, an application to the Brazilian fixed income fund market is presented.
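For reference, the market-share equations behind the Logit and Random Coefficients Logit demand models take the familiar textbook form below (a generic statement, not necessarily the exact specification estimated in this work):

```latex
% Multinomial logit: share of product j, with mean utility
% \delta_j = x_j\beta - \alpha p_j + \xi_j and an outside option 0.
s_j = \frac{\exp(\delta_j)}{1 + \sum_{k=1}^{J} \exp(\delta_k)}
% Random Coefficients Logit: individual tastes (\alpha_i, \beta_i)
% are integrated over their assumed distribution F.
s_j = \int \frac{\exp(x_j\beta_i - \alpha_i p_j + \xi_j)}
            {1 + \sum_{k=1}^{J} \exp(x_k\beta_i - \alpha_i p_k + \xi_k)}
      \, dF(\alpha_i, \beta_i)
```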
Abstract:
Conservation laws in gravitational theories with diffeomorphism and local Lorentz symmetry are studied. Main attention is paid to the construction of conserved currents and charges associated with an arbitrary vector field that generates a diffeomorphism on the spacetime. We further generalize previous results for the case of gravitational models described by quasi-invariant Lagrangians, that is, Lagrangians that change by a total derivative under the action of the local Lorentz group. The general formalism is then applied to the teleparallel models, for which the energy and the angular momentum of a Kerr black hole are calculated. The subsequent analysis of the results obtained demonstrates the importance of the choice of the frame.
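As a generic reminder only (not the paper's specific teleparallel construction), a symmetry under which the Lagrangian is quasi-invariant, that is, changes by a total derivative, yields a Noether current of the schematic form:

```latex
% For a field \varphi with \delta L = \partial_\mu K^\mu under the symmetry
% transformation \delta\varphi (quasi-invariance), the Noether current,
% its on-shell conservation and the associated charge read schematically:
J^\mu = \frac{\partial L}{\partial(\partial_\mu \varphi)}\,\delta\varphi - K^\mu,
\qquad \partial_\mu J^\mu \approx 0,
\qquad Q = \int_{\Sigma} \mathrm{d}^3x \, J^0
```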
Abstract:
Goal Programming (GP) is an important analytical approach devised to solve many real-world problems. The first GP model is known as Weighted Goal Programming (WGP). However, Multi-Choice Aspiration Level (MCAL) problems cannot be solved by current GP techniques. In this paper, we propose a Multi-Choice Mixed Integer Goal Programming (MCMIGP) model for the aggregate production planning of a Brazilian sugar and ethanol milling company. The MCMIGP model was based on traditional selection and process methods for the design of lots, representing the production system of sugar, alcohol, molasses and derivatives. The research covers decisions on the agricultural and cutting stages, sugarcane loading and transportation by suppliers and, especially, energy cogeneration decisions; that is, the choice of production process, including storage stages and distribution. The MCMIGP allows decision-makers to set multiple aspiration levels for their problems, addressing both "the more/higher, the better" and "the less/lower, the better" aspiration levels. An application of the proposed model to real problems in a Brazilian sugar and ethanol mill was conducted, producing interesting results that are reported and commented upon herein. A comparison between the MCMIGP and WGP models was also made using these real cases. © 2013 Elsevier Inc.
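For orientation, the classical WGP formulation and one common way to encode multi-choice aspiration levels with binary variables can be sketched as follows (a schematic statement, not the paper's full MCMIGP model):

```latex
% Weighted Goal Programming: minimize weighted deviations d_i^{\pm}
% from the aspiration level g_i of each goal f_i(x).
\min \sum_{i=1}^{m} \bigl( w_i^{+} d_i^{+} + w_i^{-} d_i^{-} \bigr)
\quad \text{s.t.} \quad f_i(x) + d_i^{-} - d_i^{+} = g_i,
\qquad d_i^{+}, d_i^{-} \ge 0
% Multi-choice aspiration levels: exactly one level g_{ik} is selected
% per goal via binary variables z_{ik}.
f_i(x) + d_i^{-} - d_i^{+} = \sum_{k} g_{ik} z_{ik},
\qquad \sum_{k} z_{ik} = 1, \qquad z_{ik} \in \{0, 1\}
```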
Abstract:
There are strong uncertainties regarding LAI dynamics in forest ecosystems in response to climate change. While empirical growth & yield models (G&YMs) provide good estimations of tree growth at the stand level on a yearly to decennial scale, process-based models (PBMs) use LAI dynamics as a key variable for enabling the accurate prediction of tree growth over short time scales. Bridging the gap between PBMs and G&YMs could improve the prediction of forest growth and, therefore, carbon, water and nutrient fluxes by combining modeling approaches at the stand level. Our study aimed to estimate monthly changes of leaf area in response to climate variations from sparse measurements of foliage area and biomass. A leaf population probabilistic model (SLCD) was designed to simulate foliage renewal. The leaf population was distributed in monthly cohorts, and the total population size was limited depending on forest age and productivity. Foliage dynamics were driven by a foliation function and the probabilities ruling leaf aging or fall, whose formulation depends on the forest environment. The model was applied to three tree species growing under contrasting climates and soil types. In tropical Brazilian evergreen broadleaf eucalypt plantations, the phenology was described using 8 parameters. A multi-objective evolutionary algorithm (MOEA) was used to fit the model parameters on litterfall and LAI data over an entire stand rotation. Field measurements from a second eucalypt stand were used to validate the model. Seasonal LAI changes were accurately rendered for both sites (R² = 0.898 for the fit, R² = 0.698 for validation). Litterfall production was correctly simulated (R² = 0.562, R² = 0.4018 for validation) and may be improved by using additional validation data in future work. In two French temperate deciduous forests (beech and oak), we adapted phenological sub-modules of the CASTANEA model to simulate canopy dynamics, and SLCD was validated using LAI measurements. The phenological patterns were simulated with good accuracy in the two cases studied. However, LAImax was not accurately simulated in the beech forest, and further improvement is required. Our probabilistic approach is expected to contribute to improving predictions of LAI dynamics. The model formalism is general and suitable for broadleaf forests over a large range of ecological conditions. (C) 2014 Elsevier B.V. All rights reserved.
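A toy sketch of the cohort bookkeeping idea described above (monthly cohorts, a foliation term and age-dependent fall probabilities); the functional forms, the population cap and all numbers are illustrative assumptions, not SLCD parameter values:

```python
import numpy as np

def fall_prob(age_months):
    # Fall probability rises with cohort age (illustrative shape).
    return min(0.02 + 0.03 * age_months, 0.9)

months = 36
max_population = 1.0e6          # cap tied to stand age/productivity (assumed)

cohorts = []                    # list of [age_in_months, n_leaves]
lai_proxy = []
for t in range(months):
    # Leaf fall and aging of existing cohorts.
    for c in cohorts:
        c[1] *= 1.0 - fall_prob(c[0])
        c[0] += 1
    cohorts = [c for c in cohorts if c[1] > 1.0]
    # Foliation: recruit a new monthly cohort, respecting the cap.
    total = sum(c[1] for c in cohorts)
    recruitment = min(0.1 * max_population, max_population - total)
    cohorts.append([0, recruitment])
    lai_proxy.append((total + recruitment) / max_population)

print("relative foliage area, last 12 months:",
      [round(v, 2) for v in lai_proxy[-12:]])
```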
Abstract:
Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strengths of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
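A toy sketch of building a probabilistic predator-prey network by sampling link strengths rather than fixing them at their means; the Dirichlet diets, the threshold and the connectance summary are illustrative assumptions, not the isotope-based method of the paper:

```python
import numpy as np

# Each predator's diet is a probability distribution over prey (here drawn
# from a Dirichlet, standing in for isotope-based diet estimates). Link-
# strength variability is propagated by sampling many network realizations
# instead of analysing a single mean network.
rng = np.random.default_rng(0)
n_pred, n_prey, n_samples = 4, 6, 1000
diet_concentration = rng.uniform(0.2, 2.0, size=(n_pred, n_prey))

realizations = np.stack([
    np.vstack([rng.dirichlet(diet_concentration[i]) for i in range(n_pred)])
    for _ in range(n_samples)
])                                   # shape: (n_samples, n_pred, n_prey)

# A simple structural summary per realization: connectance after
# thresholding weak links, compared with the mean-network estimate.
threshold = 0.10
connectance = (realizations > threshold).mean(axis=(1, 2))
mean_network = realizations.mean(axis=0)
print("connectance, mean network:", (mean_network > threshold).mean())
print("connectance, 5th-95th percentile over realizations:",
      np.percentile(connectance, [5, 95]))
```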
Abstract:
Item response theory (IRT) comprises a set of statistical models that are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour concerning the latent trait distribution. They also developed a Metropolis-Hastings within the Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, as opposed to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is considered jointly with the development of model fitting assessment tools, and the results are compared with the ones obtained by Azevedo et al. The results indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
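The stochastic representation due to Henze that the proposed sampler builds on can be written, in the standard direct parameterization (the paper works with the centred parameterization, which is not reproduced here), as:

```latex
% Henze's representation of the skew-normal SN(\lambda): with
% U_0, U_1 independent N(0,1) and \delta = \lambda / \sqrt{1 + \lambda^2},
X = \delta\,|U_0| + \sqrt{1 - \delta^2}\,U_1 \;\sim\; \mathrm{SN}(\lambda),
\qquad f_X(x) = 2\,\phi(x)\,\Phi(\lambda x)
```

Conditional on |U_0|, X is simply normally distributed, which is the hierarchical structure the abstract refers to.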
Abstract:
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses.
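A brief sketch, using simulated pretest-posttest data, of how the GEE and mixed-model routes might be set up in statsmodels; the column names, simulated effects and model formulas are illustrative assumptions, not the analysis in the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pretest-posttest data set with two measurements per subject.
rng = np.random.default_rng(1)
n_subj = 60
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), 2),
    "time": np.tile(["pre", "post"], n_subj),
    "group": np.repeat(rng.choice(["control", "treatment"], n_subj), 2),
})
df["y"] = np.exp(0.5 + 0.3 * (df["time"] == "post")
                 + 0.2 * (df["group"] == "treatment")
                 + rng.normal(scale=0.4, size=len(df)))

# GEE with an exchangeable working correlation targets the marginal
# (population-averaged) expected response directly.
gee = smf.gee("np.log(y) ~ time * group", groups="subject", data=df,
              family=sm.families.Gaussian(),
              cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())

# A log-normal linear mixed model (LNLMM) analogue: random intercept
# per subject on the log scale.
lmm = smf.mixedlm("np.log(y) ~ time * group", data=df, groups=df["subject"])
print(lmm.fit().summary())
```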
Abstract:
Fraud is a global problem that has demanded increasing attention due to the rapid expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether the fraud detection model is accurate enough to correctly classify a case as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve classification accuracy. In this paper, we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors classification. Via a large simulation study and various real datasets, we found that the probabilistic networks are a strong modeling option, with high predictive capacity and a substantial gain from the bagging procedure when compared to traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
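A hedged illustration of the bagging idea described above, on simulated data; a Gaussian naive Bayes classifier stands in for the k-dependence probabilistic networks studied in the paper (naive Bayes being the k = 0 special case), and all data and settings are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Imbalanced, fraud-style binary classification task (simulated).
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bootstrap aggregating: fit one classifier per bootstrap replicate of the
# training data and average the predicted probabilities.
rng = np.random.default_rng(0)
n_estimators = 25
probas = np.zeros((n_estimators, len(y_te)))
for b in range(n_estimators):
    idx = rng.integers(0, len(y_tr), size=len(y_tr))   # bootstrap replicate
    clf = GaussianNB().fit(X_tr[idx], y_tr[idx])
    probas[b] = clf.predict_proba(X_te)[:, 1]

bagged_pred = (probas.mean(axis=0) >= 0.5).astype(int)
single_pred = GaussianNB().fit(X_tr, y_tr).predict(X_te)
print("single classifier accuracy:", accuracy_score(y_te, single_pred))
print("bagged ensemble accuracy:  ", accuracy_score(y_te, bagged_pred))
```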
Abstract:
Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. To decide when a given probability is sufficient, the most common approach is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model. A null model can be used as the alternative model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study that evaluates the impact of the choice of a null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results: For all the tests, the target null model presented the lowest number of false positives when random sequences were used as a test. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the application of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and therefore lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the use of the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
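A minimal sketch of the log-odds scoring that this comparison rests on, showing how the choice of null model (uniform vs. target composition) shifts the score of a GC-biased sequence; the toy sequence and composition values are assumptions, not data from the study:

```python
import math
from collections import Counter

# Log-odds score of a sequence x against a family model M relative to a
# null model N: S(x) = log2 P(x|M) - log2 P(x|N). The "model" below is an
# illustrative position-independent composition.
def log2_prob(seq, composition):
    return sum(math.log2(composition[c]) for c in seq)

seq = "GCGCGGCCGCATGCGCGGGC"          # a GC-rich toy target sequence
model = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}   # family model M

uniform_null = {c: 0.25 for c in "ACGT"}
counts = Counter(seq)
target_null = {c: counts[c] / len(seq) for c in "ACGT"}

score_uniform = log2_prob(seq, model) - log2_prob(seq, uniform_null)
score_target = log2_prob(seq, model) - log2_prob(seq, target_null)
print("log-odds vs uniform null:", round(score_uniform, 2))
print("log-odds vs target-composition null:", round(score_target, 2))
```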
Abstract:
Structural durability is an important criterion that must be evaluated for every type of structure. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability, especially when these structures are built in aggressive atmospheres. Chloride ingress triggers the corrosion of the reinforcement; therefore, by modelling this phenomenon, the corrosion process, and hence structural durability, can be better evaluated. Corrosion begins when a threshold level of chloride concentration is reached at the steel bars of the reinforcement. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict the corrosion initiation time accurately due to the inherent randomness observed in this process. In this regard, structural durability can be more realistically represented using probabilistic approaches. This paper addresses the probabilistic analysis of corrosion initiation time in reinforced concrete structures exposed to chloride penetration. The chloride penetration is modelled using Fick's diffusion law, which simulates the chloride diffusion process considering time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the first-order reliability method, with a direct coupling approach. Some examples are considered in order to study these phenomena. Moreover, a simplified method is proposed to determine optimal values for the concrete cover.
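A minimal Monte Carlo sketch of the probabilistic setting described above, using the classical error-function solution of Fick's second law for a semi-infinite medium; all parameter distributions and values are illustrative assumptions, not those of the paper:

```python
import numpy as np
from math import erf

# Chloride profile from Fick's second law (constant surface concentration
# Cs, apparent diffusion coefficient D):
#   C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
# Corrosion is assumed to initiate when C at the rebar depth (cover)
# reaches the critical concentration Ccrit.
def chloride(x_m, t_s, Cs, D):
    return Cs * (1.0 - erf(x_m / (2.0 * np.sqrt(D * t_s))))

rng = np.random.default_rng(7)
n = 20_000
years = 50
t = years * 365.25 * 24 * 3600.0

# Illustrative random variables (lognormal D, normal cover, etc.).
D = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=n)      # m^2/s
cover = rng.normal(loc=0.045, scale=0.005, size=n)            # m
Cs = rng.normal(loc=0.60, scale=0.10, size=n)                 # % cement mass
Ccrit = rng.normal(loc=0.40, scale=0.05, size=n)              # % cement mass

C_at_rebar = np.array([chloride(x, t, cs, d)
                       for x, cs, d in zip(cover, Cs, D)])
pf = np.mean(C_at_rebar >= Ccrit)
print(f"P(corrosion initiated within {years} years) ~ {pf:.3f}")
```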