887 results for alternative model
Abstract:
Wednesday 2nd April 2014
Speaker(s): Stefan Decker
Time: 02/04/2014 11:00-11:50
Location: B2/1083
File size: 897 MB
Abstract: Ontologies have been promoted and used for knowledge sharing. Several models for representing ontologies have been developed in the Knowledge Representation field, in particular associated with the Semantic Web. In my talk I will summarise developments so far, and will argue that the currently advocated approaches miss certain basic properties of current distributed information sharing infrastructures (read: the Web and the Internet). I will sketch an alternative model aiming to support knowledge sharing and re-use on a global basis.
Abstract:
The paper provides an alternative model for an insurance market with three types of agents: households, providers of a service, and insurance companies. Households face uncertainty about future levels of income. Providers, if hired by a household, perform a diagnosis and privately learn a signal. For each signal there is a procedure that maximizes the likelihood of the household obtaining the good state of nature. The paper assumes that providers care about their income and also about the likelihood that households will obtain the good state of nature (the sympathy assumption). This assumption is satisfied if, for example, they care about their reputation or if there are possible litigation costs in case they do not use the appropriate procedure. Finally, insurance companies offer contracts to both providers and households. The paper provides sufficient conditions for the existence of equilibrium and shows that the sympathy assumption leads to a loss of welfare for the households due to the need to give providers incentives to choose the least expensive treatment.
Abstract:
This paper presents the development of a transmission line model based on discrete circuit elements that provides responses directly in the time domain and in the phase domain. The model is valid for ideally transposed lines: the phases of each small line segment are decoupled into their modes of propagation, and the voltages and currents are calculated in the modal domain. The phase-mode-phase conversion, however, is embedded in the state equations that describe the currents and voltages along the line, so the user of the model does not need to know modal-domain transmission line theory.
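The core idea above, representing a line by discrete circuit elements and integrating the state equations directly in the time domain, can be sketched for a single segment. The following is a minimal illustration, not the paper's actual model: one series R-L branch feeding a shunt capacitance, with hypothetical parameter values, stepped forward with explicit Euler integration.

```python
# Minimal sketch: one lumped (discrete-element) line segment solved directly
# in the time domain. R, L, C and the source are hypothetical illustration
# values, chosen only to show the state-equation integration.
R, L, C = 1.0, 1e-3, 1e-6     # ohm, henry, farad
Vs = 1.0                      # step source voltage, V
dt, n_steps = 1e-7, 200_000   # forward-Euler time step and step count

i, v = 0.0, 0.0               # states: inductor current, capacitor voltage
for _ in range(n_steps):
    di = (Vs - R * i - v) / L  # series branch state equation
    dv = i / C                 # shunt capacitor state equation
    i += dt * di
    v += dt * dv
# after the damped oscillation dies out, v settles to the source voltage
```

A full line model chains many such segments and, as the abstract notes, folds the phase-mode-phase transformation into the state matrices so the user never handles modal quantities directly.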
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) in the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
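The change-in-test-statistic comparison the abstract describes is, in essence, a chi-square difference (likelihood ratio) test between nested invariance models. A minimal sketch of that computation, using hypothetical fit statistics rather than any values from the study:

```python
from scipy.stats import chi2

def chisq_difference_test(chisq_constrained, df_constrained,
                          chisq_free, df_free):
    """Chi-square difference test between nested CFA models
    (e.g. metric-invariance vs. configural)."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_chisq, d_df)  # upper-tail probability
    return d_chisq, d_df, p

# Hypothetical chi-square values for a constrained vs. a free model
d, ddf, p = chisq_difference_test(132.4, 60, 118.9, 54)
```

A significant p rejects the more constrained model, i.e. it signals a loss of invariance; the simulation conditions in the study assess how trustworthy that decision is under categorization and censoring.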
Abstract:
Research has been carried out on two-lane highways in the Madrid Region to propose an alternative model for the speed-flow relationship using regular loop data. The model is different in shape and, in some cases, in slopes with respect to the contents of the Highway Capacity Manual (HCM). A model is proposed for a mountainous-area road, something for which the HCM does not explicitly provide a solution. The problem of a mountain road with high flows accessing a popular recreational area is discussed, and some solutions are proposed. Up to 7 one-way sections of two-lane highways were selected, aiming to cover a significant number of different characteristics, in order to verify the proposed method across the different classes of highways into which the Manual classifies them. A large number of data were used to formulate the model and to verify the basic variables of these types of roads. The counts were collected in the same way that the Madrid Region Highway Agency performs its counts. A total of 1,471 hours were collected, in periods of 5 minutes. The models were verified by means of specific statistical tests (R2, Student's t, Durbin-Watson, ANOVA, etc.) and with diagnostics of the contrast of assumptions (normality, linearity, homoscedasticity, and independence). The model proposed for this type of highway under base conditions can explain the different behaviors as traffic volumes increase, and follows a polynomial multiple regression model of order 3, S-shaped. As secondary results of this research, the levels of service and the capacities of this road were measured with the 2000 HCM methodology, and the results are discussed. © 2011 Published by Elsevier Ltd.
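Fitting an order-3 polynomial speed-flow curve and checking it with R2, as the abstract describes, can be sketched as follows. The flow and speed values below are hypothetical illustration data, not the Madrid loop measurements:

```python
import numpy as np

# Hypothetical 5-minute flow rates (veh/h) and mean speeds (km/h);
# illustrative stand-ins, not the Madrid Region data.
flow = np.array([100, 300, 500, 700, 900, 1100, 1300], dtype=float)
speed = np.array([92, 90, 87, 82, 74, 66, 60], dtype=float)

# Order-3 polynomial speed-flow model, as in the proposed S-shaped curve
coeffs = np.polyfit(flow, speed, deg=3)
predicted = np.polyval(coeffs, flow)

# Coefficient of determination R^2, one of the verification statistics cited
ss_res = np.sum((speed - predicted) ** 2)
ss_tot = np.sum((speed - speed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The remaining checks in the abstract (Durbin-Watson, normality, homoscedasticity, independence) would then be run on the residuals `speed - predicted`.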
Abstract:
Within academic institutions, writing centers are uniquely situated, socially rich sites for exploring learning and literacy. I examine the work of the Michigan Tech Writing Center's UN 1002 World Cultures study teams primarily because student participants and Writing Center coaches are actively engaged in structuring their own learning and meaning-making processes. My research reveals that learning is closely linked to identity formation and leading the teams is an important component of the coaches' educational experiences. I argue that supporting this type of learning requires an expanded understanding of literacy and significant changes to how learning environments are conceptualized and developed. This ethnographic study draws on data collected from recordings and observations of one semester of team sessions, my own experiences as a team coach and UN 1002 teaching assistant, and interviews with Center coaches prior to their graduation. I argue that traditional forms of assessment and analysis emerging from individualized instruction models of learning cannot fully account for the dense configurations of social interactions identified in the Center's program. Instead, I view the Center as an open system and employ social theories of learning and literacy to uncover how the negotiation of meaning in one context influences and is influenced by structures and interactions within as well as beyond its boundaries. I focus on the program design, its enactment in practice, and how engagement in this type of writing center work influences coaches' learning trajectories. I conclude that, viewed as participation in a community of practice, the learning theory informing the program design supports identity formation, a key aspect of learning as argued by Etienne Wenger (1998).
The findings of this study challenge misconceptions of peer learning both in writing centers and higher education that relegate peer tutoring to the role of support for individualized models of learning. Instead, this dissertation calls for consideration of new designs that incorporate peer learning as an integral component. Designing learning contexts that cultivate and support the formation of new identities is complex, involves a flexible and opportunistic design structure, and requires the availability of multiple forms of participation and connections across contexts.
Abstract:
This paper presents an alternative model to deal with the problem of optimal energy consumption minimization of non-isothermal systems with variable inlet and outlet temperatures. The model is based on an implicit temperature ordering and the “transshipment model” proposed by Papoulias and Grossmann (1983). It is supplemented with a set of logical relationships related to the relative position of the inlet temperatures of process streams and the dynamic temperature intervals. In the extreme situation of fixed inlet and outlet temperatures, the model reduces to the “transshipment model”. Several examples with fixed and variable temperatures are presented to illustrate the model's performance.
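The transshipment model the abstract builds on can be illustrated as a small linear program: minimize utility use subject to heat balances across temperature intervals. The interval deficits below are hypothetical illustration data, not from the paper's examples:

```python
from scipy.optimize import linprog

# Minimal transshipment-style heat cascade in the spirit of
# Papoulias & Grossmann (1983): minimize hot-utility use subject to heat
# balances per temperature interval. Deficits are hypothetical (kW, top-down).
deficits = [30.0, -50.0, 40.0]

# Variables: Q_hot (hot utility) and residuals R1, R2, R3 (R3 = cold
# utility), all >= 0. Interval balances:
#   Q_hot - R1 = d1 ;  R1 - R2 = d2 ;  R2 - R3 = d3
c = [1.0, 0.0, 0.0, 0.0]           # objective: minimize Q_hot
A_eq = [[1, -1, 0, 0],
        [0, 1, -1, 0],
        [0, 0, 1, -1]]
res = linprog(c, A_eq=A_eq, b_eq=deficits, bounds=[(0, None)] * 4)
q_hot = res.x[0]                   # minimum hot utility (here 30 kW)
```

The paper's contribution is what this sketch omits: with variable inlet and outlet temperatures, the interval boundaries themselves move, which the authors handle with an implicit temperature ordering and logical relationships; with fixed temperatures the formulation collapses back to the plain cascade above.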
Abstract:
Harmful Algal Blooms (HABs) are a worldwide problem that has been increasing in frequency and extent over the past several decades. HABs severely damage aquatic ecosystems by destroying benthic habitat, reducing invertebrate and fish populations, and affecting larger species such as dugongs that rely on seagrasses for food. Few statistical models for predicting HAB occurrences have been developed, and in common with most predictive models in ecology, those that have been developed do not fully account for uncertainties in parameters and model structure. This makes management decisions based on these predictions riskier than might be supposed. We used a probit time series model and Bayesian Model Averaging (BMA) to predict occurrences of blooms of Lyngbya majuscula, a toxic cyanophyte, in Deception Bay, Queensland, Australia. We found a suite of useful predictors for HAB occurrence, with temperature figuring prominently in the models carrying the majority of posterior support; a model consisting of the single covariate average monthly minimum temperature showed by far the greatest posterior support. A comparison of alternative model averaging strategies was made between one strategy using the full posterior distribution and a simpler approach that utilised the majority of the posterior distribution for predictions but with vastly fewer models. Both BMA approaches showed excellent predictive performance with little difference in their predictive capacity. Applications of BMA are still rare in ecology, particularly in management settings. This study demonstrates the power of BMA as an important management tool that is capable of high predictive performance while fully accounting for both parameter and model uncertainty.
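The model-averaging step at the heart of BMA can be sketched with a common BIC approximation to posterior model probabilities. The BIC values and per-model bloom probabilities below are hypothetical, not results from the study:

```python
import numpy as np

# Sketch of BMA weights via the BIC approximation to posterior model
# probability: w_m ∝ exp(-ΔBIC_m / 2). All numbers are hypothetical.
bic = np.array([210.3, 214.8, 219.1])   # three candidate bloom models
delta = bic - bic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                # normalized posterior weights

# Model-averaged bloom probability from each model's prediction
p_bloom = np.array([0.62, 0.55, 0.48])
p_avg = float(weights @ p_bloom)
```

The "full posterior" strategy in the abstract averages over every visited model in this way, while the simpler strategy keeps only the small set of models carrying most of the posterior mass.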
Abstract:
Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and, (3) quantify the sensitivities of their output to their parameterization. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
Abstract:
Recently, a hybrid distribution function was proposed to describe a plasma species with an enhanced superthermal component. This combines a Cairns-type "nonthermal" form with the Tsallis theory for nonextensive thermodynamics. Using this alternative model, the propagation of arbitrary amplitude ion acoustic solitary waves in a two-component plasma is investigated. From a careful study of the distribution function it is found that the model itself is valid only for a very restricted range of the nonextensivity parameter q and the nonthermality parameter a. Solitary waves, the amplitude and nature of which depend sensitively on both q and a, can exist within a narrow range of allowable Mach numbers. Both positive and negative potential structures are found, and coexistence may occur. © 2013 American Physical Society.
Abstract:
The objective of this study is to provide an alternative modeling approach, an artificial neural network (ANN) model, to predict the compositional viscosity of binary mixtures of room-temperature ionic liquids (ILs) [Cn-mim][NTf2] with n = 4, 6, 8, 10 in methanol and ethanol over the entire range of molar fraction at a broad range of temperatures from T = 293.0 to 328.0 K. The results show that the proposed ANN model provides an alternative way to predict compositional viscosity successfully with highly improved accuracy, and also show its potential to be utilized extensively to predict compositional viscosity over a wide range of temperatures and for more complex compositions, i.e., with more complex intermolecular interactions between components, for which it would be hard or impossible to establish an analytical model. © 2010 IEEE.
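An ANN regression of viscosity on temperature and mole fraction, the general approach the abstract describes, can be sketched with a small multilayer perceptron. The data below are a synthetic toy surface, not the IL/alcohol measurements, and the network architecture is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: temperature (K), IL mole fraction, and a toy
# viscosity surface -- illustrative only, not the measurements in the paper.
rng = np.random.default_rng(0)
T = rng.uniform(293.0, 328.0, 200)
x = rng.uniform(0.0, 1.0, 200)
eta = np.exp(-0.02 * (T - 293.0)) * (0.5 + 2.0 * x)

# Small MLP with scaled inputs; hidden sizes are a hypothetical choice.
X = np.column_stack([T, x])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
).fit(X, eta)
score = model.score(X, eta)  # in-sample R^2 of the fitted surface
```

In practice the fit would be judged on held-out mixtures and temperatures rather than in-sample, which is the "wide range" generalization claim the abstract makes.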
Abstract:
Exploratory and confirmatory factor analyses reported in the French technical manual of the WISC-IV provide evidence supporting a structure with four indices: Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI). Although the WISC-IV is more attuned to contemporary theory, it is still not in total accordance with the dominant theory: the Cattell-Horn-Carroll (CHC) theory of cognitive ability. This study was designed to determine whether the French WISC-IV is better described with the four-factor solution or whether an alternative model based on the CHC theory is more appropriate. The intercorrelation matrix reported in the French technical manual was submitted to confirmatory factor analysis. A comparison of competing models suggests that a model based on the CHC theory fits the data better than the current WISC-IV structure. It appears that the French WISC-IV in fact measures six factors: crystallized intelligence (Gc), fluid intelligence (Gf), short-term memory (Gsm), processing speed (Gs), quantitative knowledge (Gq), and visual processing (Gv). We recommend that clinicians interpret the subtests of the French WISC-IV in relation to this CHC model in addition to the four indices.