971 results for Generalized Jkr Model
Abstract:
A model of the cognitive process of natural language processing has been developed using the formalism of generalized nets. According to this stage-simulating model, the treatment of information inevitably includes phases that require joint operations in two knowledge spaces: language and semantics. To examine and formalize the relations between the language and semantic levels of treatment, the language is presented as an information system, conceived on the basis of human cognitive resources, semantic primitives, semantic operators, and language rules and data. This approach is applied to modeling a specific grammatical rule: secondary predication in Russian. Grammatical rules of the language space are expressed as operators in the semantic space. Examples from the linguistics domain are treated, and several conclusions about the semantics of the modeled rule are drawn. The results of applying the information-system approach to the language turn out to be consistent with the stages of treatment modeled with the generalized net.
Abstract:
This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system are obtained from an experimental, fully operational pipeline distribution system. The system is equipped with data logging for three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed so that multi-operational conditions of the distribution system, including multiple pressures and multiple flows, can be obtained. We then statistically test and show that the pressure and flow variables can serve as a signature of leaks under the designed multi-operational conditions. We further show that detecting leakages by training and testing the proposed multi-model decision system with prior data clustering, under multi-operational conditions, produces better recognition rates than training based on a single-model approach. The decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
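The two-stage scheme the abstract describes — cluster the operating conditions first, then train one model per cluster — can be sketched roughly as follows. The synthetic data, the variable layout, and the use of logistic regression as the GLM are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for the three logged variables:
# inlet pressure, outlet pressure, outlet flow (columns 0, 1, 2).
n = 600
X = rng.normal(size=(n, 3))
# Hypothetical leak signature: a leak when outlet pressure drops
# well below inlet pressure (plus noise).
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=n) > 0.5).astype(int)

# Stage 1: cluster samples into operating conditions,
# here using inlet pressure and outlet flow.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[:, [0, 2]])

# Stage 2: fit one GLM (logistic regression) per cluster.
models = {c: LogisticRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

def predict_leak(x):
    """Route a sample to its operating-condition cluster's model."""
    c = km.predict(x[:, [0, 2]])[0]
    return int(models[c].predict(x)[0])
```

Routing unseen samples through the cluster assignment before prediction is what lets each per-cluster model specialize to one operating regime, which is the intuition behind the reported gain over a single-model approach.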
Abstract:
Fuzzy data envelopment analysis (DEA) models emerged as a class of DEA models that account for imprecise inputs and outputs of decision-making units (DMUs). Although several approaches for solving fuzzy DEA models have been developed, they have drawbacks, ranging from insufficient discrimination power to simplistic numerical examples that handle only triangular or symmetrical fuzzy numbers. To address these drawbacks, this paper proposes using the concept of expected value in a generalized DEA (GDEA) model. This allows the unification of three models - the fuzzy expected CCR, fuzzy expected BCC, and fuzzy expected FDH models - and enables them to handle both symmetrical and asymmetrical fuzzy numbers. We also explore the role of the fuzzy GDEA model as a ranking method and compare it to existing super-efficiency evaluation models. Our proposed model is always feasible, whereas infeasibility problems remain in certain cases under existing super-efficiency models. To illustrate the performance of the proposed method, it is first tested on two established numerical examples and compared with the results of alternative methods. A third example, on energy dependency among 23 European Union (EU) member countries, is then used to validate the approach and demonstrate its efficacy under asymmetric fuzzy numbers.
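One common way to defuzzify both symmetric and asymmetric triangular fuzzy numbers is the credibility-theory expected value, E[(a, m, b)] = (a + 2m + b)/4. Whether this is exactly the operator used in the paper's GDEA model is an assumption here, but it shows how a single formula covers both cases:

```python
def expected_value_triangular(a, m, b):
    """Credibility-theory expected value of a triangular fuzzy
    number (a, m, b): E = (a + 2*m + b) / 4.  Equals the mode m
    when the number is symmetric (m - a == b - m)."""
    return (a + 2 * m + b) / 4.0

# Symmetric: the expected value equals the mode.
print(expected_value_triangular(1, 2, 3))  # → 2.0
# Asymmetric: the long right tail pulls the expected value up.
print(expected_value_triangular(1, 2, 7))  # → 3.0
```

Replacing each fuzzy input and output by such a crisp expected value is what lets a standard (G)DEA program be solved without restricting the data to symmetric shapes.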
Abstract:
This paper presents a novel approach to the computation of primitive geometrical structures when no prior knowledge about the visual scene is available and a high level of noise is expected. We base our work on the grouping principles of proximity and similarity, applied to points and to preliminary models. The former is realized using minimum spanning trees (MSTs), to which we apply stable alignment and goodness-of-fit criteria; for the latter, we use spectral clustering of preliminary models. The algorithm generalizes to various model-fitting settings without tuning of run parameters. Experiments demonstrate significant improvement in the localization accuracy of models in plane, homography, and motion segmentation examples. Unlike most algorithms in the field, its efficiency does not depend on fine-tuning of run parameters.
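A minimal sketch of the proximity step: build an MST over the points, then cut edges much longer than the typical edge so that dense structures separate from background clutter. The toy data and the 3×-median cut rule are assumptions for illustration; the paper's alignment and goodness-of-fit criteria are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
# Toy scene: noisy points near a line, plus scattered outliers.
t = rng.uniform(0, 10, 40)
line_pts = np.c_[t, 2 * t + rng.normal(0, 0.05, 40)]
outliers = rng.uniform(0, 20, (10, 2))
pts = np.vstack([line_pts, outliers])

# Proximity grouping: MST over pairwise distances...
mst = minimum_spanning_tree(cdist(pts, pts)).toarray()
# ...then prune edges much longer than the typical MST edge.
edge_weights = mst[mst > 0]
mst[mst > 3 * np.median(edge_weights)] = 0  # heuristic cut

# Connected components of the pruned MST are the preliminary groups,
# which a later stage would refine (e.g., by spectral clustering).
n_groups, labels = connected_components(mst > 0, directed=False)
```

The median-based cut is one simple stand-in for a parameter-free criterion: it adapts to the data's own edge-length scale rather than requiring an absolute distance threshold.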
Detecting Precipitation Climate Changes: An Approach Based on a Stochastic Daily Precipitation Model
Abstract:
2002 Mathematics Subject Classification: 62M10.
Abstract:
In non-linear random-effects models, some attention has recently been devoted to the analysis of suitable transformations of the response variables, either separately from (Taylor 1996) or together with (Oberg and Davidian 2000) transformations of the covariates; as far as we know, no investigation has been carried out on the choice of the link function in such models. In our study we consider the use of a random-effects model when a parameterized family of links (Aranda-Ordaz 1981, Prentice 1996, Pregibon 1980, Stukel 1988, Czado 1997) is introduced. We point out the advantages and drawbacks of this data-driven kind of modeling. Difficulties in interpreting the regression parameters, and therefore in understanding the influence of covariates, as well as problems related to loss of estimation efficiency and overfitting, are discussed. A case study on radiotherapy usage in breast cancer treatment is presented.
Abstract:
MSC 2010: 46F30, 46F10
Abstract:
Exposure to counter-stereotypic gender role models (e.g., a woman engineer) has been shown to successfully reduce the application of biased gender stereotypes. We tested the hypothesis that such efforts may more generally lessen the application of stereotypic knowledge in other (non-gendered) domains. Specifically, based on the notion that counter-stereotypes can stimulate a lesser reliance on heuristic thinking, we predicted that contesting gender stereotypes would eliminate a more general group prototypicality bias in the selection of leaders. Three studies supported this hypothesis. After exposing participants to a counter-stereotypic gender role model, group prototypicality no longer predicted leadership evaluation and selection. We discuss the implications of these findings for groups and organizations seeking to capitalize on the benefits of an increasingly diverse workforce.
Abstract:
Recent theoretical investigations have demonstrated that the stability of mode-locked solutions of multiple frequency channels depends on the degree of inhomogeneity in gain saturation. In this article, these results are generalized to determine the conditions on each of the system parameters necessary for both the stability and the existence of mode-locked pulse solutions for an arbitrary number of frequency channels. In particular, we find that the parameters governing saturable intensity discrimination and gain inhomogeneity in the laser cavity also determine the positions of bifurcations of solution types, and these bifurcations are completely characterized in terms of those parameters. Beyond their influence on the stability of mode-locked solutions, we determine a balance between cubic gain and quintic loss that is necessary for the existence of solutions. Furthermore, we determine the critical degree of inhomogeneous gain broadening required to support pulses in multiple frequency channels. © 2010 The American Physical Society.
Abstract:
OBJECTIVE: The objective of this study was to examine medical illness and anxiety, depressive, and somatic symptoms in older medical patients with generalized anxiety disorder (GAD). METHOD: A case-control study was designed and conducted in the University of California, San Diego (UCSD) Geriatrics Clinics. A total of 54 older medical patients with GAD and 54 matched controls participated. MEASUREMENTS: The measurements used for this study were the Brief Symptom Inventory-18, the Mini International Neuropsychiatric Interview, and the Anxiety Disorders Interview Schedule. RESULTS: Older medical patients with GAD reported higher levels of somatic symptoms, anxiety, and depression than other older adults, as well as higher rates of diabetes and gastrointestinal conditions. In a multivariate model that included somatic symptoms, medical conditions, and depressive and anxiety symptoms, anxiety symptoms were the only significant predictors of GAD. CONCLUSION: These results suggest first, that older medical patients with GAD do not primarily express distress as somatic symptoms; second, that anxiety symptoms in geriatric patients should not be discounted as a byproduct of medical illness or depression; and third, that older adults with diabetes and gastrointestinal conditions may benefit from screening for anxiety.
Abstract:
Intertemporal choice is one of the crucial questions in economic modeling; it describes decisions that require trade-offs among outcomes occurring at different points in time. In economic modeling, exponential discounting is the best known, although it has weak validity in empirical studies. According to psychologists, generalized hyperbolic discounting has the strongest descriptive validity, but it is complex and hard to use in economic models. In response to this challenge, quasi-hyperbolic discounting was proposed: it captures the most important psychological properties of generalized hyperbolic discounting while remaining tractable in analytical modeling. It has therefore become common to substitute quasi-hyperbolic discounting for generalized hyperbolic discounting.
This paper argues that substituting one of these models for the other leads to different conclusions in long-term decisions, especially in the case of series; hence, models that use quasi-hyperbolic discounting for long-term decisions should be revised if they assume that the generalized hyperbolic discounting model would yield the same conclusions.
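The divergence at long horizons is easy to reproduce numerically. Below is a sketch with textbook functional forms (the β-δ quasi-hyperbolic model and the Loewenstein-Prelec generalized hyperbolic form); the parameter values are illustrative, not calibrated to any study.

```python
def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # Beta-delta model: no discounting at t = 0, then beta * delta**t.
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, gamma=1.0):
    # Loewenstein-Prelec form: (1 + alpha*t) ** (-gamma / alpha).
    return (1 + alpha * t) ** (-gamma / alpha)

# At long horizons the quasi-hyperbolic factor decays geometrically,
# while the generalized hyperbolic factor decays only polynomially,
# so a stream of payoffs can be ranked differently by the two models.
for t in (1, 10, 50, 200):
    print(t, quasi_hyperbolic(t), generalized_hyperbolic(t))
```

With these illustrative parameters the two factors are of comparable size at short horizons, but by t = 200 the generalized hyperbolic factor (about 1/201) is far larger than the quasi-hyperbolic one, which is the kind of long-run disagreement the paper is concerned with.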
Abstract:
Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models, as abstractions, at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models first-class artifacts, extending engineers' ability to use concepts from the problem domain of discourse to specify appropriate solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), in which models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain.
This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception, with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE, subsequently producing a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model-synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK through swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain - demand-side smart grid, or microgrid, energy management - and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
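The decoupling described above — a generic model of execution that takes domain-specific knowledge as a swappable extension — can be caricatured in a few lines. All class and method names here are hypothetical illustrations, not the CVM's actual API.

```python
from abc import ABC, abstractmethod

class DomainSpecificKnowledge(ABC):
    """Swappable extension carrying the domain semantics (the DSK)."""
    @abstractmethod
    def to_script(self, model_change: str) -> str: ...

class CommunicationDSK(DomainSpecificKnowledge):
    def to_script(self, model_change):
        return f"comm-script({model_change})"

class MicrogridDSK(DomainSpecificKnowledge):
    def to_script(self, model_change):
        return f"grid-script({model_change})"

class SynthesisEngine:
    """Generic model of execution (GMoE): a domain-independent loop
    that delegates every domain decision to the injected DSK."""
    def __init__(self, dsk: DomainSpecificKnowledge):
        self.dsk = dsk

    def synthesize(self, model_changes):
        # Transform runtime model changes into executable scripts
        # for the next lower DSVM layer to process.
        return [self.dsk.to_script(c) for c in model_changes]
```

Instantiating `SynthesisEngine(CommunicationDSK())` or `SynthesisEngine(MicrogridDSK())` reuses the same GMoE across both domains, which is the kind of reuse the empirical study evaluates.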
Abstract:
This thesis involves two parts. The first is a newly proposed theoretical approach called generalized atoms in molecules (GAIM). The second is a computational study of the deamination reaction of adenine with OH⁻/nH₂O (n = 0, 1, 2, 3) and 3H₂O. The GAIM approach aims to solve for the energy of each atom variationally in a first step and then to build the energy of the molecule from its atoms. Thus the energy of a diatomic molecule (A-B) is formulated as the sum of its atomic energies, EA and EB, where

EA = Hᴬ + Vₑₑᴬᴬ + ½Vₑₑᴬᴮ
EB = Hᴮ + Vₑₑᴮᴮ + ½Vₑₑᴬᴮ

Here Hᴬ and Hᴮ are the kinetic and nuclear-attraction energies of the electrons of atoms A and B, respectively; Vₑₑᴬᴬ and Vₑₑᴮᴮ are the interaction energies among the electrons on atoms A and B, respectively; and Vₑₑᴬᴮ is the interaction energy between the electrons of atom A and the electrons of atom B. The energy of the molecule is then minimized subject to the constraint

∫ρA(r)dr + ∫ρB(r)dr = N

where ρA(r) and ρB(r) are the electron densities of atoms A and B, respectively, and N is the total number of electrons. The initial testing of the performance of GAIM was done by calculating dissociation curves for H₂, LiH, Li₂, BH, HF, HCl, N₂, F₂, and Cl₂. The numerical results show that GAIM performs very well for H₂, LiH, Li₂, BH, HF, and HCl. GAIM shows convergence problems for N₂, F₂, and Cl₂ due to difficulties in reordering the degenerate atomic orbitals Px, Py, and Pz in the N, F, and Cl atoms. Further development of GAIM is required. Deamination of adenine produces one of several forms of premutagenic lesions occurring in DNA. In this thesis, mechanisms for the deamination reaction of adenine with OH⁻/nH₂O (n = 0, 1, 2, 3) and 3H₂O were investigated. The HF/6-31G(d), B3LYP/6-31G(d), MP2/6-31G(d), and B3LYP/6-31+G(d) levels of theory were employed to optimize all geometries. Energies were calculated at the G3MP2B3 and CBS-QB3 levels of theory.
The effect of solvent (water) was computed using the polarizable continuum model (PCM). Intrinsic reaction coordinate (IRC) calculations were performed for all transition states. Five pathways were investigated for the deamination reaction of adenine with OH⁻/nH₂O and 3H₂O. The first four pathways (A-D) begin with deprotonation of the amino group of adenine by OH⁻, while pathway E is initiated by tautomerization of adenine. For all pathways, the next two steps involve the formation of a tetrahedral intermediate followed by dissociation to yield products via a 1,3-hydrogen shift. Deamination with a single OH⁻ has a high activation barrier (190 kJ mol⁻¹ at the G3MP2B3 level) for the rate-determining step. Adding one water molecule reduces this barrier by 68 kJ mol⁻¹ at the G3MP2B3 level. Adding more water molecules decreases the overall activation energy of the reaction, but the effect becomes smaller with each additional water molecule. The most plausible mechanism is pathway E, the deamination reaction of adenine with 3H₂O, which has overall G3MP2B3 activation energies of 139 and 137 kJ mol⁻¹ in the gas phase and in PCM, respectively. This barrier is lower than that for deamination with OH⁻/3H₂O by 6 and 2 kJ mol⁻¹ in the gas phase and in PCM, respectively.
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after properly taming the large scaling corrections of the model through a combined approach of various techniques from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining the zero-temperature numerical scheme in detail and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and illustrate the extrapolation of several observables to the infinite-size limit within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
Abstract:
Mixtures of Zellner's g-priors have been studied extensively in linear models and have been shown to have numerous desirable properties for Bayesian variable selection and model averaging. Several extensions of g-priors to generalized linear models (GLMs) have been proposed in the literature; however, the choice of the prior distribution of g and the resulting properties for inference have received considerably less attention. In this paper, we extend mixtures of g-priors to GLMs by assigning the truncated compound confluent hypergeometric (tCCH) distribution to 1/(1+g) and illustrate how this prior distribution encompasses several special cases of mixtures of g-priors in the literature, such as the hyper-g, truncated Gamma, Beta-prime, and Robust priors. Under an integrated Laplace approximation to the likelihood, the posterior distribution of 1/(1+g) is in turn a tCCH distribution, and approximate marginal likelihoods are thus available analytically. We discuss the local geometric properties of the g-prior in GLMs and show that specific choices of the hyper-parameters satisfy the various desiderata for model selection proposed by Bayarri et al., such as asymptotic model selection consistency, information consistency, intrinsic consistency, and measurement invariance. We also illustrate inference using these priors and contrast them with others in the literature via simulation and real examples.
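The change of variables u = 1/(1+g) is easy to probe numerically for one special case, the hyper-g prior of Liang et al., under which u follows a Beta(a/2 - 1, 1) distribution; with a = 4 this is uniform on (0, 1). The Monte Carlo check below illustrates only this reparameterization, not the tCCH posterior computations of the paper.

```python
import random

random.seed(0)
a = 4.0  # hyper-g hyper-parameter; a = 4 makes u = 1/(1+g) uniform
# Sample on the u = 1/(1+g) scale, where the prior is a simple Beta.
us = [random.betavariate(a / 2 - 1, 1) for _ in range(10_000)]
# Map back to the g scale: g = 1/u - 1 >= 0.
gs = [1.0 / u - 1.0 for u in us]

mean_u = sum(us) / len(us)
# E[u] = 0.5 for a = 4; the sample mean should land close to it.
print(round(mean_u, 3))
```

Working on the bounded u scale rather than on g itself is also what makes the tCCH family convenient: its support and truncation are naturally expressed on (0, 1).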