84 results for Fit


Relevance: 10.00%

Publisher:

Abstract:

One of the major problems when using non-dedicated volunteer resources in a distributed network is the high volatility of these hosts, since they can go offline or become unavailable at any time without control. Furthermore, the use of volunteer resources implies some security issues, because they are generally anonymous entities about which we know nothing. So, how can we trust someone we do not know? Over the last years an important number of reputation-based trust solutions have been designed to evaluate the participants' behavior in a system. However, most of these solutions are addressed to P2P and ad-hoc mobile networks and may not fit well with other kinds of distributed systems that could take advantage of volunteer resources, such as recent cloud computing infrastructures. In this paper we propose a first approach to the design of an anonymous reputation mechanism for CoDeS [1], a middleware for building fogs in which services are deployed using volunteer resources. The participants are reputation clients (RC), a reputation authority (RA) and a certification authority (CA). Users need a valid public key certificate from the CA to register with the RA and obtain the data needed to participate in the system, namely an opaque identifier that we call a pseudonym and an initial reputation value that users provide to other users when interacting with each other. The mechanism prevents not only the manipulation of the provided reputation values but also any disclosure of the users' identities to other users or authorities, so anonymity is guaranteed.
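
The registration flow described above lends itself to a short illustration. Below is a minimal Python sketch of the pseudonym issuance step, assuming a trivial in-memory RA; the Certificate type, the ReputationAuthority class and the initial reputation value of 0.5 are our own illustrative inventions, standing in for the certificate validation and unlinkability machinery (e.g. blind signatures) a real deployment would need.

import secrets
from dataclasses import dataclass

@dataclass
class Certificate:
    subject: str   # user identity, known to the CA
    issuer: str    # name of the issuing CA

class ReputationAuthority:
    """Toy RA: checks the CA-issued certificate and hands back an
    opaque pseudonym plus an initial reputation value. A real RA
    must additionally be unable to link pseudonym to subject."""

    INITIAL_REPUTATION = 0.5  # illustrative starting value, not from the paper

    def __init__(self, trusted_ca: str):
        self.trusted_ca = trusted_ca
        self.reputation = {}  # pseudonym -> reputation value

    def register(self, cert: Certificate) -> tuple[str, float]:
        # Accept only certificates signed by the trusted CA.
        if cert.issuer != self.trusted_ca:
            raise ValueError("certificate not issued by the trusted CA")
        # A random opaque identifier: nothing in it links back to
        # cert.subject, which is the basis of the anonymity guarantee.
        pseudonym = secrets.token_hex(16)
        self.reputation[pseudonym] = self.INITIAL_REPUTATION
        return pseudonym, self.INITIAL_REPUTATION

ra = ReputationAuthority(trusted_ca="CoDeS-CA")
pseudonym, rep = ra.register(Certificate(subject="alice", issuer="CoDeS-CA"))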

Relevance: 10.00%

Publisher:

Abstract:

Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms either of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, at reviewing methodological aspects of the inequality measurement of certain environmental data and, on the other, at extending the scarce empirical evidence relating to the international distribution of Ecological Footprint (EF), by using a longer EF time series. Most of the techniques currently important in the literature are reviewed and then tested on EF data with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices which fit best with environmental inequality measurement are CV² and GE(2) because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting factor decomposition method is proposed in order to isolate weighting factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV², which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the environmental inequality measurement of EF: this inequality has been quite stable, and its change over time is due to per capita vector changes rather than population changes. Almost the entirety of the EF inequality is explainable by differences in the means between the countries of the World Bank group. This finding suggests that international environmental agreements should be attempted on a regional basis in an attempt to achieve greater consensus between the parties involved. Additionally, source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies because of the implications for basic needs satisfaction. Keywords: ecological footprint; ecological inequality measurement; inequality decomposition.
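
For reference, the two indices singled out above have simple closed forms, and the natural decomposition of CV² by source follows from the covariance decomposition of the variance. The display below is the standard statement of these facts in our own notation, not a formula reproduced from the paper:

$$ \mathrm{CV}^2 = \frac{\sigma^2}{\mu^2}, \qquad GE(2) = \tfrac{1}{2}\,\mathrm{CV}^2, \qquad \mathrm{CV}^2(x) = \sum_k \frac{\mathrm{cov}(x_k, x)}{\mu^2} \quad \text{for } x = \sum_k x_k, $$

so the marginal contribution of a source $k$ (e.g. a crop-based footprint component) is simply $\mathrm{cov}(x_k, x)/\mu^2$.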

Relevance: 10.00%

Publisher:

Abstract:

We performed a comprehensive study to assess the fitness for purpose of four chromatographic conditions for the determination of six groups of marine lipophilic toxins (okadaic acid (OA) and dinophysistoxins, pectenotoxins, azaspiracids, yessotoxins, gymnodimine and spirolides) by LC-MS/MS, in order to select the most suitable conditions as stated by the European Union Reference Laboratory for Marine Biotoxins (EURLMB). In every case, the elution gradient was optimized to achieve a total run-time cycle of 12 min. We performed a single-laboratory validation for the analysis of three matrices relevant to the seafood aquaculture industry (mussels, Pacific oysters and clams), and for sea urchins, for which no data about lipophilic toxins had been reported before. Moreover, we compared the method performance under alkaline conditions using two quantification strategies: external standard calibration (EXS) and matrix-matched standard calibration (MMS). Alkaline conditions were the only scenario that allowed detection windows with polarity switching in a 3200 QTrap mass spectrometer, so the analysis of all toxins can be accomplished in a single run, increasing sample throughput. The limits of quantification under alkaline conditions met the validation requirements established by the EURLMB for all toxins and matrices, while the remaining conditions failed in some cases. The accuracy of the method and the matrix effects were generally dependent on the mobile phases and the seafood species. MMS had a moderate positive impact on method accuracy for crude extracts, but it showed poor trueness for seafood species other than mussels when analyzing hydrolyzed extracts. Alkaline conditions with EXS and recovery correction for OA were selected as the most suitable conditions in the context of our laboratory. This comparative study can help other laboratories to choose the best conditions for the implementation of LC-MS/MS according to their own needs.
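
As a point of reference for the quantification strategies above, recovery correction under external standard calibration is conventionally applied as follows; this is the generic formula, stated for clarity in our own notation rather than quoted from the paper:

$$ c_{\mathrm{corrected}} = \frac{c_{\mathrm{measured}}}{R}, \qquad R = \frac{c_{\mathrm{found}}}{c_{\mathrm{added}}}, $$

where $R$ is the fractional recovery of the analyte (OA in the case above) estimated from spiked samples.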

Relevance: 10.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S_0, S_1, S_2, ..., S_R. S_0, of dimension d_0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T_0 = {T_0^egy, T_0^etc.}; the CI coefficients in S_0 always remain free to vary. S_1 accommodates Ks with attributes above T_1 ≤ T_0. An eigenproblem of dimension d_0 + d_1 for S_0 + S_1 is solved first, after which the last d_1 rows and columns are contracted into a single row and column, thus freezing the last d_1 CI coefficients hereinafter. The process is repeated with successive S_j (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. Threshold values {T_j; j = 0, 1, 2, ..., R} regulate accuracy; for a large-dimensional S, high accuracy requires S_0 + S_1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One-microhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
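
The contraction step is the heart of the scheme, and a compact sketch makes it concrete. Below is a schematic numpy illustration, assuming the frozen coefficient vector is normalized; it ignores Davidson's method, sparse storage and the out-of-RAM stages of the actual procedure, and all names are ours, not the authors'.

import numpy as np

def contract_frozen_tail(H, c_frozen):
    """Collapse the last len(c_frozen) rows/columns of the symmetric
    CI matrix H into a single composite row/column, freezing the
    corresponding CI coefficients at the values in c_frozen
    (assumed normalized so the composite vector has unit norm)."""
    d_free = H.shape[0] - c_frozen.size
    A = H[:d_free, :d_free]                        # free-free block
    b = H[:d_free, d_free:] @ c_frozen             # coupling to the composite
    e = c_frozen @ H[d_free:, d_free:] @ c_frozen  # composite diagonal element
    Hc = np.empty((d_free + 1, d_free + 1))
    Hc[:d_free, :d_free] = A
    Hc[:d_free, -1] = b
    Hc[-1, :d_free] = b
    Hc[-1, -1] = e
    return Hc

# One step of the scheme: solve in S_0 + S_1, freeze the S_1 coefficients
# at their values in the target eigenvector, then carry the contracted
# matrix into the next subspace. Because the contracted problem lives in
# a subspace, its lowest eigenvalue stays above that of the full matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
H = (M + M.T) / 2
w, V = np.linalg.eigh(H)                  # stand-in for Davidson's eigensolver
c_frozen = V[5:, 0]
c_frozen = c_frozen / np.linalg.norm(c_frozen)
H_next = contract_frozen_tail(H, c_frozen)  # 6x6: five free rows + one composite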

Relevance: 10.00%

Publisher:

Abstract:

The classical description of Si oxidation given by Deal and Grove has well-known limitations for thin oxides (below 200 Å). Among the large number of alternative models published so far, the interfacial emission model has shown the greatest ability to fit the experimental oxidation curves. It relies on the assumption that during oxidation Si interstitials are emitted into the oxide to release strain, and that the accumulation of these interstitials near the interface reduces the reaction rate there. The resulting set of differential equations makes it possible to model diverse oxidation experiments. In this paper, we have compared its predictions with two sets of experiments: (1) the pressure dependence for subatmospheric oxygen pressure and (2) the enhancement of the oxidation rate after annealing in an inert atmosphere. The result is not satisfactory and raises serious doubts about the model's correctness.
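
For context, the Deal and Grove description referred to above is the classical linear-parabolic law relating oxide thickness $x$ to oxidation time $t$ (standard textbook form, quoted for the reader's convenience):

$$ x^2 + Ax = B\,(t + \tau) \quad \Longleftrightarrow \quad \frac{dx}{dt} = \frac{B}{2x + A}, $$

where $B$ is the parabolic rate constant, $B/A$ the linear rate constant, and $\tau$ accounts for any initial oxide thickness. The thin-oxide regime (below about 200 Å) in which this law breaks down is precisely the one addressed by the alternative models discussed above.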

Relevance: 10.00%

Publisher:

Abstract:

Olive oil consumption is protective against risk factors for cardiovascular disease and cancer. A nutrigenomic approach was used to assess whether changes in gene expression could occur in human peripheral blood mononuclear cells after olive oil ingestion in the postprandial state. Six healthy male volunteers ingested, in the fasting state, 50 ml of olive oil. Prior to the intervention, a 1-week washout period with a controlled diet and sunflower oil as the only source of fat was followed. During the 3 days before and on the intervention day, a very low-phenolic compound diet was followed. At baseline (0 h) and post-ingestion (6 h), total RNA was isolated and the expression of 29,082 genes was evaluated by microarray. From the microarray data, nutrient-gene interactions were observed in genes related to metabolism, cellular processes, cancer, and atherosclerosis (e.g. USP48 by a 2.16- and OGT by a 1.68-fold change) and associated processes such as inflammation (e.g. AKAP13 by a 2.30- and IL-10 by a 1.66-fold change) and DNA damage (e.g. DCLRE1C by a 1.47- and POLK by a 1.44-fold change). When the microarray results were verified by qRT-PCR for nine genes, full concordance was achieved only in the case of up-regulated genes. Changes were observed at a real-life dose of olive oil, as it is daily consumed in some Mediterranean areas. Our results support the hypothesis that postprandial protective changes related to olive oil consumption could be mediated through gene expression changes.

Relevance: 10.00%

Publisher:

Abstract:

Virgin olive oil (VOO) is considered to be one of the main components responsible for the health benefits of the Mediterranean diet, particularly against atherosclerosis, in whose development and progression peripheral blood mononuclear cells (PBMNCs) play a crucial role. The objective of this article was to identify the PBMNC genes that respond to VOO consumption, in order to ascertain the molecular mechanisms underlying the beneficial action of VOO in the prevention of atherosclerosis. Gene expression profiles of PBMNCs from healthy individuals were examined in pooled RNA samples by microarray after 3 weeks of moderate and regular consumption of VOO as the main fat source in a diet controlled for antioxidant content. Gene expression was verified by qPCR. The response to VOO consumption was confirmed in individual samples (n = 10) by qPCR for 10 upregulated genes (ADAM17, ALDH1A1, BIRC1, ERCC5, LIAS, OGT, PPARBP, TNFSF10, USP48, and XRCC5). Their putative role in the molecular mechanisms involved in atherosclerosis development and progression is discussed, focusing on a possible relation with VOO consumption. Our data support the hypothesis that 3 weeks of nutritional intervention with VOO supplementation, at doses common in the Mediterranean diet, can alter the expression of genes related to atherosclerosis development and progression.

Relevance: 10.00%

Publisher:

Abstract:

Research project carried out by a secondary school student and awarded a CIRIT Prize to foster the scientific spirit among young people in 2010. The objectives of the work were, on the one hand, to decide whether, contrary to what is cited in the literature, osteochronological methods are applicable to the dating of tortoises older than 20 years; and, on the other hand, to determine whether dendrological methods are applicable to osteochronology, both as regards the counting technique and in relation to the ecological information they may yield (growth and life of the animal). To this end, the set of samples corresponding to 8 osteochronological methodologies (54 microphotographs and 25 electron micrographs), obtained over two consecutive years, was analyzed in order to decide which methodology is best. The best osteochronological methodology of all those studied is toluidine blue staining of undecalcified bone samples. The analogies between tree growth rings and the growth marks of the bones studied make it possible to apply dendrological methods to osteochronology, both for dating and for ecological information. Given that we were able to determine that the tortoise studied is 75 years old, we can state that some osteochronological methods are applicable to the dating of tortoises of advanced age.

Relevance: 10.00%

Publisher:

Abstract:

Standard economic analysis holds that labor market rigidities are harmful for job creation and typically increase unemployment. But many orthodox reforms of the labor market have proved difficult to implement because of political opposition. For these reasons it is important to explain why we observe such regulations. In this paper I outline a theory of how they may arise and why they fit together. This theory is fully developed in a forthcoming book (Saint-Paul (2000)), to which the reader is referred for further details.

Relevance: 10.00%

Publisher:

Abstract:

We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables deviate possibly from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
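
To fix ideas, a prototypical errors-in-variables regression of the kind treated here can be written as follows (a generic formulation in our own notation, not necessarily the paper's exact specification):

$$ y_i = \alpha + \beta' \xi_i + \epsilon_i, \qquad x_i = \xi_i + \delta_i, $$

where only $(y_i, x_i)$ are observed, $\xi_i$ is the latent true regressor, and $\delta_i$ and $\epsilon_i$ are measurement and equation errors; the functional case treats the $\xi_i$ as fixed constants, the structural case as random draws.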

Relevance: 10.00%

Publisher:

Abstract:

Gray (1988) has put forward a hypothesis on how a national accounting environment might reflect the cultural dimensions identified by Hofstede (1980, 1983). A number of studies have tested Gray's hypothesis, including one by Pourjalali and Meek (1995), which identified a match between changes in cultural dimensions and the accounting environment in Iran following the revolution. In this paper we replicate this work in the context of Spain following the death of Franco in 1975 and the emergence of a democratic constitution in 1978. Specifically, we: 1) consider Gray's hypothesis built on Hofstede's cultural dimensions and review some empirical tests of the hypothesis; 2) building on the work of Hofstede and Gray, put forward some hypotheses on how we would expect cultural dimensions to change in Spain with the transition to democracy; and 3) review developments in accounting in Spain following the transition to democracy, in order to identify how well these fit with our hypotheses.

Relevance: 10.00%

Publisher:

Abstract:

Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
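
For concreteness, in the iid single-sample case the matrix $\Gamma$ referred to above is the asymptotic covariance matrix of the sample second-order moments (a standard definition, stated in our notation):

$$ m = \frac{1}{n}\sum_{i=1}^{n} \mathrm{vech}(z_i z_i'), \qquad \sqrt{n}\,(m - \sigma) \;\xrightarrow{d}\; N(0, \Gamma), \qquad \Gamma = \mathrm{Var}\{\mathrm{vech}(z z')\}, $$

which in general involves fourth-order moments of the observed vector $z$; substituting $\Omega$ for $\Gamma$ avoids estimating those fourth-order moments.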

Relevance: 10.00%

Publisher:

Abstract:

Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
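
The robust standard errors mentioned above are conventionally obtained from a sandwich-type expression. In the usual notation for minimum-discrepancy estimation of moment structures, with $\Delta = \partial\sigma(\theta)/\partial\theta'$ the Jacobian of the moment structure and $V$ the weight matrix of the fitting function (a standard result, written in our symbols rather than the paper's):

$$ n\,\widehat{\mathrm{avar}}(\hat\theta) = (\Delta' V \Delta)^{-1} \Delta' V \hat\Gamma V \Delta\, (\Delta' V \Delta)^{-1}, $$

which collapses to the usual normal-theory expression $(\Delta' V \Delta)^{-1}$ when $\hat\Gamma = V^{-1}$.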

Relevance: 10.00%

Publisher:

Abstract:

We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be correctly approximated by a normal density: the simple, equal average when predicting output growth and the Bayesian model average when predicting inflation.
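
A standard device for this kind of out-of-sample density evaluation is the probability integral transform (the paper's exact battery of tests is not listed here, so take this as illustrative): if the Gaussian predictive density with mean $\hat\mu_{t+h|t}$ and standard deviation $\hat\sigma_{t+h|t}$ is correctly specified, the transforms

$$ z_{t+h} = \Phi\!\left(\frac{y_{t+h} - \hat\mu_{t+h|t}}{\hat\sigma_{t+h|t}}\right) $$

should be uniformly distributed on $[0,1]$ (and, for one-step-ahead forecasts, independent over time), which can be checked with standard uniformity and independence tests.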