886 results for Nested Model Structure


Relevância: 80.00%

Resumo:

Evolutionary selection of sequences is studied with a knowledge-based Hamiltonian to find the design principle for folding to a model protein structure. With sequences selected by naive energy minimization, the model structure tends to be unstable and the folding ability is low. Sequences with high folding ability have not only a low-lying energy minimum but also an energy landscape similar to that found for the native sequence over a wide region of the conformation space. Although there is large variation among foldable sequences, the hydrophobicity pattern and the glycine locations are preserved among them. Implications of the design principle for the molecular mechanism of folding are discussed.
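
The selection scheme lends itself to a compact illustration. Below is a toy sketch, not the paper's knowledge-based Hamiltonian: sequences over a hydrophobic/polar alphabet are mutated and accepted by naive energy minimization against an invented energy function; the alphabet, "core" positions and coefficients are all hypothetical.

```python
import random

CORE = {2, 3, 7, 11}            # hypothetical buried positions of the target fold
L = 16                          # toy chain length

def energy(seq):
    """Lower is better: hydrophobic (H) buried in the core, polar (P) outside."""
    return -sum((seq[i] == "H") == (i in CORE) for i in range(L))

def evolve(steps=2000, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice("HP") for _ in range(L)]
    e = energy(seq)
    for _ in range(steps):
        i = rng.randrange(L)
        trial = seq[:]
        trial[i] = "H" if trial[i] == "P" else "P"
        if energy(trial) <= e:  # naive minimization: accept non-increasing moves
            seq, e = trial, energy(trial)
    return "".join(seq), e

print(evolve())  # the H/P pattern that minimizes the toy energy
```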

Relevância: 80.00%

Resumo:

This package includes various Mata functions. kern(): various kernel functions; kint(): kernel integral functions; kdel0(): canonical bandwidth of kernel; quantile(): quantile function; median(): median; iqrange(): inter-quartile range; ecdf(): cumulative distribution function; relrank(): grade transformation; ranks(): ranks/cumulative frequencies; freq(): compute frequency counts; histogram(): produce histogram data; mgof(): multinomial goodness-of-fit tests; collapse(): summary statistics by subgroups; _collapse(): summary statistics by subgroups; gini(): Gini coefficient; sample(): draw random sample; srswr(): SRS with replacement; srswor(): SRS without replacement; upswr(): UPS with replacement; upswor(): UPS without replacement; bs(): bootstrap estimation; bs2(): bootstrap estimation; bs_report(): report bootstrap results; jk(): jackknife estimation; jk_report(): report jackknife results; subset(): obtain subsets, one at a time; composition(): obtain compositions, one by one; ncompositions(): determine number of compositions; partition(): obtain partitions, one at a time; npartitions(): determine number of partitions; rsubset(): draw random subset; rcomposition(): draw random composition; colvar(): variance, by column; meancolvar(): mean and variance, by column; variance0(): population variance; meanvariance0(): mean and population variance; mse(): mean squared error; colmse(): mean squared error, by column; sse(): sum of squared errors; colsse(): sum of squared errors, by column; benford(): Benford distribution; cauchy(): cumulative Cauchy-Lorentz dist.; cauchyden(): Cauchy-Lorentz density; cauchytail(): reverse cumulative Cauchy-Lorentz; invcauchy(): inverse cumulative Cauchy-Lorentz; rbinomial(): generate binomial random numbers; cebinomial(): conditional expectation of binomial r.v.; root(): Brent's univariate zero finder; nrroot(): Newton-Raphson zero finder; finvert(): univariate function inverter; integrate_sr(): univariate function integration (Simpson's rule); integrate_38(): univariate function integration (Simpson's 3/8 rule); ipolate(): linear interpolation; polint(): polynomial inter-/extrapolation; plot(): draw twoway plot; _plot(): draw twoway plot; panels(): identify nested panel structure; _panels(): identify panel sizes; npanels(): identify number of panels; nunique(): count number of distinct values; nuniqrows(): count number of unique rows; isconstant(): whether matrix is constant; nobs(): number of observations; colrunsum(): running sum of each column; linbin(): linear binning; fastlinbin(): fast linear binning; exactbin(): exact binning; makegrid(): equally spaced grid points; cut(): categorize data vector; posof(): find element in vector; which(): positions of nonzero elements; locate(): search an ordered vector; hunt(): consecutive search; cond(): matrix conditional operator; expand(): duplicate single rows/columns; _expand(): duplicate rows/columns in place; repeat(): duplicate contents as a whole; _repeat(): duplicate contents in place; unorder2(): stable version of unorder(); jumble2(): stable version of jumble(); _jumble2(): stable version of _jumble(); pieces(): break string into pieces; npieces(): count number of pieces; _npieces(): count number of pieces; invtokens(): reverse of tokens(); realofstr(): convert string into real; strexpand(): expand string argument; matlist(): display a (real) matrix; insheet(): read spreadsheet file; infile(): read free-format file; outsheet(): write spreadsheet file; callf(): pass optional args to function; callf_setup(): setup for mm_callf().
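
As a taste of what a function like jk()/jk_report() computes, here is a minimal Python analog of leave-one-out jackknife estimation. This is an independent sketch of the standard technique, not a translation of the Mata source.

```python
import numpy as np

def jackknife(data, stat):
    """Bias-corrected jackknife estimate and standard error of `stat`."""
    data = np.asarray(data)
    n = len(data)
    theta_full = stat(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])  # leave-one-out
    bias = (n - 1) * (loo.mean() - theta_full)
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return theta_full - bias, se

x = np.random.default_rng(1).exponential(size=50)
print(jackknife(x, np.mean))   # estimate and SE of the sample mean
```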

Relevância: 80.00%

Resumo:

A much-revised Quaternary stratigraphy is presented for ignimbrites and pumice fall deposits of the Bandas del Sur, in southern Tenerife. New Ar-40/Ar-39 data obtained for the Arico, Granadilla, Fasnia, Poris, La Caleta and Abrigo formations are presented, allowing correlation with previously dated offshore marine ashfall layers and volcaniclastic sediments. We also provide a minimum age of 287 +/- 7 ka for a major sector collapse event at the Güímar valley. The Bandas del Sur succession includes more than seven widespread ignimbrite sheets that share similar characteristics: widespread basal Plinian layers, predominantly phonolitic composition, similarly extensive geographic distributions, thin condensed veneers with abundant diffuse bedding and complex lateral and vertical grading patterns, lateral gradations into localized massive facies within palaeo-wadis, and widespread lithic breccia layers that probably record caldera-forming eruptions. Each ignimbrite sheet records substantial bypassing of pyroclastic material into the ocean. The succession indicates that Las Cañadas volcano underwent a series of major explosive eruptions, each starting with a Plinian phase followed by emplacement of ignimbrites and thin ash layers, some of co-ignimbrite origin. Several of the ignimbrite sheets are compositionally zoned and contain subordinate mafic pumices and banded pumices indicative of magma mingling immediately prior to eruption. Because passage of each pyroclastic density current was characterized by phases of non-deposition and erosion, the entire course of each eruption is incompletely recorded at any one location, accounting for some previously perceived differences between the units. Because each current passed into the ocean, estimating eruption volumes is virtually impossible. Nevertheless, the consistent widespread distributions and the presence of lithic breccias within most of the ignimbrite sheets suggest that at least seven caldera collapse eruptions are recorded in the Bandas del Sur succession and probably formed a complex, nested collapse structure. Detailed field relationships show that extensive ignimbrite sheets (e.g. the Arico, Poris and La Caleta formations) relate to previously unrecognized caldera collapse events. We envisage that the evolution of the nested Las Cañadas caldera is more complex than previously thought and involved a protracted history of successive ignimbrite-related caldera collapse events and large sector collapse events, interspersed with edifice-building phases.

Relevância: 80.00%

Resumo:

There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time, or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted and an assessment made of the general applicability of each of the models, their limitations and the sources of error in their model predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form.
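
To make the "readily implemented" point concrete, here is a Python skeleton of the product-of-powers form that Plitt-type corrected cut size (d50c) correlations take. The constant and exponents below are placeholders, not the published coefficients; take the actual values from the paper before any real use.

```python
# Generic power-law skeleton of a Plitt-type d50c correlation. ALL numbers
# below are placeholders for illustration, not the published coefficients.
PLACEHOLDER = dict(a0=1.0, aDc=0.5, aDi=0.6, aDo=1.2, aDu=-0.7, ah=-0.4, aQ=-0.45)

def d50c(Dc, Di, Do, Du, h, Q, c=PLACEHOLDER):
    """Corrected cut size from cyclone diameter Dc, inlet diameter Di,
    vortex finder diameter Do, spigot diameter Du, free vortex height h
    and flow rate Q (consistent units assumed)."""
    return (c["a0"] * Dc ** c["aDc"] * Di ** c["aDi"] * Do ** c["aDo"]
            * Du ** c["aDu"] * h ** c["ah"] * Q ** c["aQ"])

print(d50c(Dc=0.25, Di=0.08, Do=0.09, Du=0.05, h=1.0, Q=0.02))
```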

Relevância: 80.00%

Resumo:

Data on the occurrence of species are widely used to inform the design of reserve networks. These data contain commission errors (when a species is mistakenly thought to be present) and omission errors (when a species is mistakenly thought to be absent), and the rates of the two types of error are inversely related. Point locality data can minimize commission errors, but those obtained from museum collections are generally sparse, suffer from substantial spatial bias and contain large omission errors. Geographic ranges generate large commission errors because they assume homogeneous species distributions. Predicted distribution data make explicit inferences on species occurrence, and their commission and omission errors depend on model structure, on the omission of variables that determine species distribution, and on data resolution. Omission errors lead to identifying networks of areas for conservation action that are smaller than required and centred on known species occurrences, thus affecting the comprehensiveness, representativeness and efficiency of selected areas. Commission errors lead to selecting areas not relevant to conservation, thus affecting the representativeness and adequacy of reserve networks. Conservation plans should include an estimation of commission and omission errors in the underlying species data and explicitly use this information to influence conservation planning outcomes.
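
The two error types are easy to make concrete. Assuming simple presence/absence vectors (the data here are invented), a minimal sketch of commission and omission rates:

```python
import numpy as np

def error_rates(predicted, actual):
    """Commission = predicted present but actually absent;
    omission = predicted absent but actually present."""
    predicted, actual = np.asarray(predicted, bool), np.asarray(actual, bool)
    commission = (predicted & ~actual).sum() / predicted.sum()
    omission = (~predicted & actual).sum() / actual.sum()
    return commission, omission

pred = [1, 1, 1, 0, 0, 1]   # e.g. from a range map or distribution model
true = [1, 0, 1, 1, 0, 1]   # point-locality "ground truth"
print(error_rates(pred, true))
```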

Relevância: 80.00%

Resumo:

Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models from vibration test data have attracted considerable interest recently. However, no method has gained general acceptance, owing to a number of difficulties: (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iteratively updating the initial estimates, via sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined from the frequency response data of the unmodified structure by a structural modification technique, so mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
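
A minimal numerical sketch of the central idea, for a hypothetical 2-DOF spring-mass chain: eigenvalues are "measured" (synthetically) before and after a known mass addition, and the spring stiffnesses are identified by iterating a first-order eigenvalue-sensitivity least squares update. All values are invented for illustration; this is not the thesis's full Bayesian procedure.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF chain: springs k[0] (ground-mass 1) and k[1] (mass 1-mass 2).
def K_of(k):
    return np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])

m = np.diag([1.0, 1.5])                    # baseline mass matrix
dm = np.diag([0.3, 0.0])                   # known added mass on coordinate 1
k_true = np.array([4.0, 2.0])              # "actual structure"
k_est = np.array([3.0, 3.0])               # initial theoretical model

# "Measured" eigenvalues of the structure before and after perturbation.
lam_meas = np.concatenate([eigh(K_of(k_true), m)[0],
                           eigh(K_of(k_true), m + dm)[0]])

dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),      # dK/dk1
      np.array([[1.0, -1.0], [-1.0, 1.0]])]    # dK/dk2

for _ in range(20):
    lam_mod, rows = [], []
    for M in (m, m + dm):
        lam, phi = eigh(K_of(k_est), M)        # phi is mass-normalized by eigh
        lam_mod.append(lam)
        # For mass-normalized modes, d(lam_j)/d(k_i) = phi_j' (dK/dk_i) phi_j.
        rows += [[phi[:, j] @ D @ phi[:, j] for D in dK] for j in range(2)]
    residual = lam_meas - np.concatenate(lam_mod)
    k_est = k_est + np.linalg.lstsq(np.array(rows), residual, rcond=None)[0]

print(k_est)  # recovers k_true when data and model structure are exact
```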

Relevância: 80.00%

Resumo:

The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.
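
As a concrete example of the kind of long-memory diagnostic at stake, here is a minimal classical rescaled-range (R/S) estimator of the Hurst exponent. This is an independent textbook sketch on simulated NIID returns, not the paper's joint test of long memory and multifractality.

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Classical R/S estimate of the Hurst exponent H.
    H near 0.5 suggests no long memory; H > 0.5 suggests persistence."""
    x = np.asarray(x, float)
    ns, rs = [], []
    n = len(x) // 2
    while n >= min_n:
        vals = []
        for i in range(0, len(x) - n + 1, n):      # non-overlapping blocks
            c = x[i:i + n]
            z = np.cumsum(c - c.mean())
            s = c.std(ddof=1)
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        ns.append(n)
        rs.append(np.mean(vals))
        n //= 2
    return np.polyfit(np.log(ns), np.log(rs), 1)[0]  # slope of log R/S vs log n

returns = np.random.default_rng(0).standard_normal(4096)  # NIID benchmark
print(hurst_rs(returns))  # roughly 0.5 (finite-sample bias pushes it slightly up)
```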

Relevância: 80.00%

Resumo:

Several levels of complexity are available for modelling wastewater treatment plants. Modelling of local effects relies on computational fluid dynamics (CFD) approaches, whereas activated sludge models (ASM) represent the global, plant-wide methodology. By applying both modelling approaches to pilot plant and full-scale systems, this paper evaluates the value of each method and especially their potential combination. Model structure identification for ASM is discussed based on modelling of a full-scale closed-loop oxidation ditch. It is illustrated how, and in what circumstances, information obtained via CFD analysis, residence time distribution (RTD) measurements and other experimental means can be used. Furthermore, CFD analysis of the multiphase flow mechanisms is employed to obtain a correct description of the oxygenation capacity of the system studied, including an easy implementation of this information in classical ASM modelling (e.g. oxygen transfer). The combination of CFD and activated sludge modelling of wastewater treatment processes is applied to three reactor configurations: a perfectly mixed reactor, a pilot-scale activated sludge basin (ASB) and a real-scale ASB. The application of the biological models to the CFD model is validated against experimentation for the pilot-scale ASB and against a classical global ASM model response. A first step in the evaluation of the potential of the combined CFD-ASM model is performed using a full-scale oxidation ditch system as the testing scenario.
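
One concrete bridge between RTD experiments and ASM model structure identification is fitting an equivalent tanks-in-series number N from tracer data via the moment relation N ≈ τ²/σ². A minimal sketch on a synthetic tracer curve (the data and curve shape are invented):

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal integral of samples y over grid t."""
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))

def tanks_in_series(t, c):
    e = c / trapz(c, t)                      # normalized RTD E(t)
    tau = trapz(t * e, t)                    # mean residence time
    var = trapz((t - tau) ** 2 * e, t)       # variance of the RTD
    return tau, max(1, round(tau ** 2 / var))

t = np.linspace(0.0, 20.0, 2000)
c = t ** 2 * np.exp(-1.5 * t)                # synthetic gamma-shaped tracer response
print(tanks_in_series(t, c))                 # tau ≈ 2.0, N = 3 for this curve
```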

Relevância: 80.00%

Resumo:

Land use and transportation interaction has been a research topic for several decades. There have been efforts to identify impacts of transportation on land use from several different perspectives. One focus has been the role of transportation improvements in encouraging new land developments or relocation of activities due to improved accessibility, with the impacts studied including property values and increased development. Another focus has been the changes in travel behavior due to better mobility and accessibility. Most studies to date have been conducted at the metropolitan level and are thus unable to account for interactions spatially and temporally at smaller geographic scales.

In this study, a framework for studying the temporal interactions between transportation and land use was proposed and applied to three selected corridor areas in Miami-Dade County, Florida. The framework consists of two parts: developing temporal data, and applying time series analysis to these data to identify their dynamic interactions. Temporal GIS databases were constructed and used to compile building permit data and transportation improvement projects. Two types of time series approaches were utilized: univariate models and multivariate models. Time series analysis describes the dynamic consequences of a series by developing models and forecasting the future of the system based on historical trends. Model estimation results from the selected corridors were then compared.

It was found that the time series models predicted residential development better than commercial development. Results from the three study corridors also varied in the magnitude of impacts, length of lags, significance of the variables, and model structure. The long-run, cumulative impact of transportation improvement on land development was also measured with time series techniques. The study offered evidence that congestion negatively impacted development and that transportation investments encouraged land development.
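
A minimal sketch of the multivariate step, using statsmodels (assumed available) on synthetic permit/investment series; the data-generating numbers are invented and only illustrate fitting a VAR and choosing the lag order:

```python
import numpy as np
from statsmodels.tsa.api import VAR   # assumes statsmodels is installed

rng = np.random.default_rng(42)
T = 120                                         # ten years of monthly data
invest = rng.poisson(5, T).astype(float)        # transportation investment proxy
permits = np.zeros(T)
for t in range(2, T):
    # permits respond to their own past and to investment two months back
    permits[t] = 0.5 * permits[t - 1] + 0.8 * invest[t - 2] + rng.normal(0, 1)

data = np.column_stack([permits, invest])
res = VAR(data).fit(maxlags=6, ic="aic")        # lag order chosen by AIC
print(res.params)                               # estimated dynamic effects
```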

Relevância: 80.00%

Resumo:

Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency, utilizing the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and by changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effect of salinity on mangrove forest CO2 uptake, which declines by 5% for each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetically active radiation, an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information about environmental conditions.
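
A hedged sketch of the light-use-efficiency form described above: GPP = ε(EVI, salinity, PAR) × PAR. Only the 5% decline per 10 ppt of salinity comes from the abstract; the base efficiency, the PAR-response shape and all coefficients below are placeholders, not the paper's fitted Bayesian parameters.

```python
def gpp(par, evi, salinity, eps0=0.02, k_par=0.0005):
    """Toy light-use-efficiency model; units are illustrative only."""
    eps = eps0 * evi                  # PLACEHOLDER: base efficiency scales with EVI
    eps *= 1.0 - 0.005 * salinity     # abstract: -5% per 10 ppt salinity increase
    eps *= 1.0 / (1.0 + k_par * par)  # efficiency declines with daily PAR (assumed form)
    return eps * par

print(gpp(par=2000.0, evi=0.5, salinity=30.0))
```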

Relevância: 80.00%

Resumo:

This research analyses the components of the organizational structure of UFRN (the Federal University of Rio Grande do Norte) and the extent to which they affect organizational performance. The study, classified as exploratory and descriptive, was conducted in two phases. The first phase consisted of a pilot test to refine the research instrument and to identify the latent components of the organizational structure; the second characterized these components and established their relationships with organizational performance. In the first phase, the research was conducted in 20 UFRN organizational units with the participation of 84 employees, both technical-administrative staff and teachers, after accounting for missing values and outliers. The second phase occurred in two steps: one conducted with 279 valid cases, consisting of technical-administrative staff and teachers from 37 UFRN units, and another with 112 managers of the institution in the 49 units identified in this research. The instrument adopted in the first phase was composed of 36 indicators of organizational structure, six extracted and adapted from the instrument developed by Medeiros (2003) and 30 prepared from the literature review, drawing on Mintzberg (2012), Hall (1984), Vasconcellos and Hemsley (1997) and Seiffert and Costa (2007), plus 7 performance indicators adapted from Fleury and Mills (2006), Vieira and Vieira (2003), Kaplan and Norton (1997) and the self-assessment instrument in use by the university. At this stage the data were analyzed using factor analysis and reliability analysis by means of Cronbach's alpha, aiming to extract the factors representing the components of the organizational structure. In step 1 of the second phase, the instrument refined and reduced in the previous phase, with 24 variables of organizational structure and 6 of performance, was used, while in step 2 a semi-structured interview guide, organized into nine organizational structure elements, was adopted to gather information for understanding the relationship of structure to performance at UFRN. The techniques used in the second phase were factor analysis and reliability analysis, to characterize the components extracted in the previous phase and to validate the performance variables, and correlation analysis, regression and content analysis, to establish and understand the relationship between structure and performance. The results showed, in both steps, six latent components of organizational structure in the context under study: training and internalization, communication, hierarchy, decentralization, formalization and departmentalization, all with high Cronbach's alpha indexes, which can thereby be characterized as components of the UFRN structure. Six performance indicators were validated in this study, proving efficient and highly reliable. Finally, it was found that the formalization, communication, decentralization, and training and internalization components positively affect UFRN performance, while departmentalization has an adverse effect and hierarchy did not show a significant relationship. These results are important for future studies to support the development of a model structure that represents the specifics of the university.
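
For reference, the reliability statistic used throughout the study has a compact standard form, α = k/(k−1) · (1 − Σσᵢ²/σ_total²). A minimal sketch on simulated item scores (the data are invented; the formula is the standard one):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of scale scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(100, 1))                  # one underlying construct
items = latent + rng.normal(scale=0.5, size=(100, 4))  # 4 correlated items
print(cronbach_alpha(items))   # high alpha for internally consistent items
```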

Relevância: 80.00%

Resumo:

The objective of this paper is to analyse the effects of international R&D cooperation on firms’ economic performance. Our approach, based on a complete data set with information about Spanish participants in research joint ventures supported by the EU Framework Programme during the period 1995-2005, establishes a recursive model structure to capture the relationship between R&D cooperation, knowledge generation and economic results, which are measured by labour productivity. In the analysis we take into account that the participation in this specific type of cooperative projects implies a selection process that includes both the self-selection by participants to join the consortia and the selection of projects by the European Commission to award the public aid. Empirical analysis has confirmed that: (1) R&D co-operation has a positive impact on the technological capacity of firms, captured through intangible fixed assets and (2) the technological capacity of firms is positively related to their productivity.
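
A stripped-down sketch of what such a recursive (triangular) structure implies for estimation: cooperation → technological capacity → productivity, fitted equation by equation. All coefficients and data below are invented, and the paper's selection-correction step is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
coop = (rng.random(n) < 0.4).astype(float)            # R&D cooperation dummy
intang = 1.0 + 0.6 * coop + rng.normal(0, 0.5, n)     # knowledge generation
prod = 2.0 + 0.4 * intang + rng.normal(0, 0.5, n)     # labour productivity

def ols(y, x):
    """OLS with intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(intang, coop))    # stage 1: effect of cooperation on capacity
print(ols(prod, intang))    # stage 2: effect of capacity on productivity
```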


Relevância: 80.00%

Resumo:

Abstract

Continuous variables are among the major data types collected by survey organizations. They can be incomplete, in which case the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate the values into cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution, so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. An illustration using a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
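
The fixed-marginal constraint has a convenient probabilistic expression: independent Poisson counts conditioned on their sum are exactly multinomial, so synthetic cells drawn that way reproduce the published total by construction. A toy sketch (the cell values, and the use of raw shares in place of model-based rates, are illustrative and not the thesis's mixture model):

```python
import numpy as np

rng = np.random.default_rng(11)
observed = np.array([120, 43, 7, 300, 30])     # confidential cell values
total = observed.sum()                          # published marginal total
probs = observed / total                        # stand-in for model-based rates
synthetic = rng.multinomial(total, probs)       # Poisson-given-sum = multinomial
print(synthetic, synthetic.sum() == total)      # marginal total preserved
```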

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals, whose basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
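
The idea of estimating parameters from protective intervals rather than from the values themselves can be illustrated with an interval-censored normal likelihood, maximizing Σᵢ log(Φ((bᵢ−μ)/σ) − Φ((aᵢ−μ)/σ)). This sketch uses scipy and invented data, and is an analogy rather than the thesis's MI procedure.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
x = rng.normal(10, 2, size=200)            # confidential values (never used below)
a, b = x - 1.0, x + 1.0                    # released protective intervals

def nll(theta):
    """Negative log-likelihood of interval-censored normal data."""
    mu, log_s = theta
    s = np.exp(log_s)                      # log-parameterized to keep s > 0
    return -np.sum(np.log(stats.norm.cdf(b, mu, s) - stats.norm.cdf(a, mu, s)))

res = optimize.minimize(nll, x0=[np.mean(a), 1.0])
print(res.x[0], np.exp(res.x[1]))          # close to mu = 10, sigma = 2
```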

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., frequently missing) ones. The sub-model structure for the focused variables is more complex than that for the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevância: 80.00%

Resumo:

The ability to estimate the impact of ongoing climate change on the hydrological behaviour of hydrosystems is essential for anticipating the inevitable and necessary adaptations our societies must consider. In this context, this doctoral project presents a study of the sensitivity of future hydrological projections to: (i) the non-robustness of hydrological model parameter identification, (ii) the use of several equifinal parameter sets, and (iii) the use of different hydrological model structures. To quantify the impact of the first source of uncertainty on model outputs, four climatically contrasted sub-periods are first identified within the observed records. The models are calibrated on each of these four periods, and the resulting outputs are analysed in calibration and in validation following the four configurations of the differential split-sample test (Klemeš, 1986; Wilby, 2005; Seiller et al., 2012; Refsgaard et al., 2014). To study the second source of uncertainty, the equifinality of parameter sets is then taken into account by considering, for each calibration type, the outputs associated with equifinal parameter sets. Finally, to evaluate the third source of uncertainty, five hydrological models of different levels of complexity (GR4J, MORDOR, HSAMI, SWAT and HYDROTEL) are applied to the Quebec watershed of the rivière Au Saumon. The three sources of uncertainty are evaluated both under past observed climatic conditions and under future climatic conditions. The results show that, given the evaluation method followed in this doctorate, the use of hydrological models of different levels of complexity is the main source of variability in streamflow projections under future climatic conditions, followed by the lack of robustness of parameter identification. The hydrological projections generated by an ensemble of equifinal parameter sets are close to those associated with the optimal parameter set. Consequently, more effort should be invested in improving model robustness for climate change impact studies, in particular by developing more appropriate model structures and by proposing calibration procedures that increase their robustness. This work provides a detailed assessment of our capacity to diagnose the impacts of climate change on the water resources of the Au Saumon basin and proposes an original methodological framework that can be applied directly, or adapted, to other hydro-climatic contexts.
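
In outline, the first experiment reduces to a calibrate-on-one-period, validate-on-the-others loop. A toy sketch with a one-parameter stand-in model (not GR4J, MORDOR, HSAMI, SWAT or HYDROTEL) and synthetic data; only the split-sample loop structure is the point here.

```python
import numpy as np

def model(precip, theta):
    return theta * precip                     # toy one-parameter runoff model

def calibrate(precip, flow):
    """Grid-search the runoff ratio minimizing RMSE on the calibration period."""
    thetas = np.linspace(0.0, 1.0, 201)
    rmse = [np.sqrt(np.mean((model(precip, t) - flow) ** 2)) for t in thetas]
    return thetas[int(np.argmin(rmse))]

rng = np.random.default_rng(8)
periods = {}
for name in ["dry-1", "wet-1", "dry-2", "wet-2"]:   # contrasted sub-periods
    p = rng.gamma(2.0, 5.0, 365)                    # synthetic daily precipitation
    q = 0.4 * p + rng.normal(0.0, 1.0, 365)         # synthetic daily streamflow
    periods[name] = (p, q)

for cal, (p_cal, q_cal) in periods.items():
    theta = calibrate(p_cal, q_cal)
    for val, (p_val, q_val) in periods.items():
        if val != cal:                              # validate on the other periods
            err = np.sqrt(np.mean((model(p_val, theta) - q_val) ** 2))
            print(f"calibrated on {cal}, validated on {val}: RMSE = {err:.2f}")
```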