33 results for direct comparison method
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The quantitative estimation of Sea Surface Temperatures (SST) from fossil assemblages is a fundamental issue in palaeoclimatic and palaeoceanographic investigations. The Modern Analogue Technique, a widely adopted method based on direct comparison of fossil assemblages with modern core-top samples, was revised with the aim of conforming it to compositional data analysis. The new CODAMAT method was developed by adopting the Aitchison metric as distance measure. Modern core-top datasets are characterised by a large number of zeros. Zero replacement was carried out by adopting a Bayesian approach, based on posterior estimation of the parameter of the multinomial distribution. The number of modern analogues from which to reconstruct the SST was determined by means of a multiple approach, considering the proxies correlation matrix, the Standardized Residual Sum of Squares and the Mean Squared Distance. This new CODAMAT method was applied to the planktonic foraminiferal assemblages of a core recovered in the Tyrrhenian Sea.
Key words: Modern analogues, Aitchison distance, Proxies correlation matrix, Standardized Residual Sum of Squares
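The two ingredients of CODAMAT named in the abstract can be sketched in a few lines: a Bayesian-style zero replacement (here a simple Dirichlet-posterior-mean stand-in for the paper's multinomial approach; the prior strength is an assumption of mine) followed by the Aitchison distance, i.e. the Euclidean distance between centred log-ratio (clr) transforms.

```python
import numpy as np

def bayesian_zero_replacement(counts, prior_strength=0.5):
    """Replace zeros via the posterior mean of the multinomial
    parameter under a uniform Dirichlet prior (a simple stand-in
    for the Bayesian replacement described in the abstract)."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    # Posterior mean of the proportions: (n_i + s/k) / (N + s)
    return (counts + prior_strength / k) / (counts.sum() + prior_strength)

def aitchison_distance(x, y):
    """Euclidean distance between centred log-ratio (clr) transforms."""
    lx, ly = np.log(x), np.log(y)
    clr_x = lx - lx.mean()
    clr_y = ly - ly.mean()
    return np.sqrt(np.sum((clr_x - clr_y) ** 2))

# Hypothetical assemblage counts (four species, one zero in the fossil sample)
fossil = bayesian_zero_replacement([12, 0, 30, 58])
coretop = bayesian_zero_replacement([10, 2, 28, 60])
d = aitchison_distance(fossil, coretop)
```

A useful property of this metric for compositional data is scale invariance: rescaling a composition leaves the clr transform, and hence the distance, unchanged.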
Abstract:
In this article we compare regression models obtained to predict PhD students’ academic performance in the universities of Girona (Spain) and Slovenia. Explanatory variables are characteristics of the PhD student’s research group understood as an egocentered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities due to the careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which the country was introduced as a dummy coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between Slovenian and Girona universities emerge. Some variables, like supervisor’s performance and motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student’s performance in both countries. On the other hand, variables like too close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between countries when we observe the motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his/her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation.
The number of the student’s external advice relationships and the mean contact intensity of social support are not significant in Girona, but they have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by noting that a lot of external advice may actually result from a lack of the more relevant internal advice.
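The modelling device described above (a country dummy interacted with every predictor) can be sketched with ordinary least squares. The data below are simulated under a hypothetical data-generating process that mimics the abstract's finding on advice frequency; all variable names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
country = rng.integers(0, 2, n)      # 0 = Girona, 1 = Slovenia (dummy)
advice_freq = rng.normal(size=n)     # hypothetical standardized predictor
supervisor_perf = rng.normal(size=n)

# Hypothetical process: advice frequency hurts in Girona but helps in
# Slovenia, as the abstract reports, via the interaction term.
y = (0.4 * supervisor_perf
     - 0.3 * advice_freq               # effect in Girona (country = 0)
     + 0.6 * country * advice_freq     # interaction flips the sign
     + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, dummy, main effects, and all interactions
X = np.column_stack([
    np.ones(n), country, advice_freq, supervisor_perf,
    country * advice_freq, country * supervisor_perf,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Country-specific slopes of advice frequency recover the sign flip
slope_girona = beta[2]
slope_slovenia = beta[2] + beta[4]
```

Dummy coding with a full set of interactions is equivalent to fitting separate regressions per country, but it lets one test each cross-country difference directly through the interaction coefficients.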
Abstract:
Background: Current methodology of gene expression analysis limits the possibilities of comparison between cells/tissues of organs in which cell size and/or number changes as a consequence of the study (e.g. starvation). A method relating the abundance of specific mRNA copies per cell may allow direct comparison of different organs and/or changing physiological conditions. Methods: With a number of selected genes, we analysed the relationship between the number of bases and the fluorescence recorded at a preset level using cDNA standards. A linear relationship was found between the final number of bases and the length of the transcript. The constants of this equation and those of the relationship between fluorescence and number of bases in cDNA were determined, and a general equation linking the length of the transcript and the initial number of copies of mRNA was deduced for a given pre-established fluorescence setting. This allowed the calculation of the concentration of the corresponding mRNAs per gram of tissue. The inclusion of tissue RNA and the DNA content per cell allowed the calculation of the mRNA copies per cell. Results: The application of this procedure to six genes (Arbp, cyclophilin, ChREBP, T4 deiodinase 2, acetyl-CoA carboxylase 1 and IRS-1) in liver and retroperitoneal adipose tissue of food-restricted rats allowed precise measurement of their changes irrespective of the shrinking of the tissue, the loss of cells or changes in cell size, factors that deeply complicate the comparison between changing tissue conditions. The percentage results obtained with the present method were essentially the same as those obtained with the delta-delta procedure and with individual cDNA standard curve quantitative RT-PCR estimation. Conclusion: The method presented allows the comparison (i.e. as copies of mRNA per cell) between different genes and tissues, establishing the degree of abundance of the different molecular species tested.
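The final normalization step (from copies per gram of tissue to copies per cell via the tissue DNA content) is simple arithmetic and can be made concrete. The figures below are illustrative, not from the paper; only the ~6.6 pg diploid DNA content per mammalian cell is a standard reference value.

```python
# Hypothetical worked example: convert mRNA abundance per gram of
# tissue into copies per cell using the tissue's DNA concentration
# and the DNA content of a single diploid cell.
def copies_per_cell(copies_per_g_tissue, dna_mg_per_g_tissue,
                    dna_pg_per_cell=6.6):
    """~6.6 pg is the approximate DNA content of a diploid mammalian
    cell; the other figures below are invented for illustration."""
    cells_per_g = (dna_mg_per_g_tissue * 1e9) / dna_pg_per_cell  # mg -> pg
    return copies_per_g_tissue / cells_per_g

# e.g. 2e12 copies per gram in a tissue with 2.5 mg DNA per gram
n = copies_per_cell(copies_per_g_tissue=2.0e12, dna_mg_per_g_tissue=2.5)
```

Because the denominator is a cell count rather than a mass or an RNA amount, the result is insensitive to tissue shrinkage or changes in cell size, which is the point of the method.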
Abstract:
The productive characteristics of migrating individuals (emigrant selection) affect welfare. The empirical estimation of the degree of selection suffers from a lack of complete and nationally representative data. This paper uses a new and better dataset to address both issues: the ENET (Mexican Labor Survey), which identifies emigrants right before they leave and allows a direct comparison to non-migrants. This dataset presents a relevant dichotomy: it shows on average negative selection for Mexican emigrants to the United States for the period 2000-2004, together with positive selection in Mexican emigration out of rural Mexico to the United States in the same period. Three theories that could explain this dichotomy are tested. Whereas higher skill prices in Mexico than in the US are enough to explain negative selection in urban Mexico, their combination with network effects and wealth constraints is required to account for positive selection in rural Mexico.
Abstract:
This paper examines the extent to which Mexican emigrants to the United States are negatively selected, that is, have lower skills than individuals who remain in Mexico. Previous studies have been limited by the lack of nationally representative longitudinal data. This one uses a newly available household survey, which identifies emigrants before they leave and allows a direct comparison to non-migrants. I find that, on average, US-bound Mexican emigrants from 2000 to 2004 earn a lower wage and have fewer years of schooling than individuals who remain in Mexico, evidence of negative selection. This supports the original hypothesis of Borjas (AER, 1987) and argues against recent findings, notably those of Chiquiar and Hanson (JPE, 2005). The discrepancy with the latter is primarily due to an under-count of unskilled migrants in US sources and secondarily to the omission of unobservables in their methodology.
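The core comparison the survey design enables (emigrants observed before leaving versus stayers) reduces, in its simplest form, to a gap in mean pre-migration outcomes. The sketch below uses invented numbers; only the logic of the test mirrors the paper's idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-migration log wages: stayers vs. future emigrants,
# drawn so that emigrants earn less on average (negative selection).
stayers = rng.normal(loc=2.0, scale=0.6, size=5000)
emigrants = rng.normal(loc=1.8, scale=0.6, size=400)

# A simple selection measure: the gap in mean pre-migration log wages.
selection_gap = emigrants.mean() - stayers.mean()
negatively_selected = selection_gap < 0
```

Identifying emigrants *before* departure is what makes this direct comparison possible; US-side sources observe only the migrants who arrive (and are counted), which is the under-count the abstract points to.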
Abstract:
Research project carried out during a stay at the University of Groningen, the Netherlands, between 2007 and 2009. Direct numerical simulation (DNS) of turbulence is a key tool in computational fluid dynamics. On the one hand it provides better insight into the physics of turbulence, and on the other the results obtained are essential for the development of turbulence models. However, DNS is not a feasible technique for the vast majority of industrial applications because of its high computational cost. Some degree of turbulence modelling is therefore necessary. In this context, important improvements have been introduced based on modelling the (non-linear) convective term using symmetry-preserving regularizations. The idea is to modify the convective term appropriately so as to reduce the production of smaller and smaller scales (vortex-stretching) while keeping all the invariants of the original equations. So far, these models have been used successfully for relatively high Rayleigh numbers (Ra). At this point, DNS results for more complex configurations and higher Ra numbers are essential. In this context, DNS simulations of a Differentially Heated Cavity with Ra=1e11 and Pr=0.71 were carried out on the MareNostrum supercomputer during the first of the project's two years. In addition, the code was adapted to simulate the flow around a wall-mounted cube at Re=10000. These are the largest DNS simulations performed to date for these configurations, and their correct modelling is a great challenge given the complexity of the flows. These new DNS simulations are providing new insight into the physics of turbulence and indispensable results for the progress of symmetry-preserving regularization modelling.
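The defining property of a symmetry-preserving discretization (the invariant-keeping idea the report describes) can be shown in one dimension. This is a toy sketch, not the project's DNS code: the convective term of the inviscid Burgers equation is written in its skew-symmetric split form N(u) = (D(u²) + u·Du)/3 with a periodic central-difference operator D. Because D is antisymmetric, u·N(u) = 0 discretely, so convection cannot create or destroy kinetic energy.

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
u = np.sin(x) + 0.5 * np.cos(2 * x)   # smooth periodic initial field

def D(v):
    # second-order central difference, periodic boundaries (antisymmetric)
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

def N(u):
    # skew-symmetric (energy-conserving) split of the convective term
    return (D(u ** 2) + u * D(u)) / 3.0

# Discrete convective energy production vanishes to round-off
production = np.dot(u, N(u)) * dx

# Time-march du/dt = -N(u) with classical RK4: kinetic energy stays
# constant up to the (tiny) time-integration error.
dt = 1e-3
energy0 = 0.5 * np.dot(u, u) * dx
for _ in range(200):
    k1 = -N(u)
    k2 = -N(u + 0.5 * dt * k1)
    k3 = -N(u + 0.5 * dt * k2)
    k4 = -N(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
energy_drift = abs(0.5 * np.dot(u, u) * dx - energy0) / energy0
```

The regularization models mentioned in the report go further (they smooth the convective term to damp vortex-stretching) but are built on exactly this discrete conservation structure.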
Abstract:
Didactic knowledge about contents is constructed through an idiosyncratic synthesis between knowledge about the subject area, students' general pedagogical knowledge and the teacher's biography. This study aimed to understand the construction process and the sources of Pedagogical Content Knowledge, as well as to analyze its manifestations and variations in interactive teaching by teachers whom the students considered competent. Data collection involved teachers from an undergraduate nursing program in the South of Brazil, through non-participant observation and semi-structured interviews. Data analysis used the constant comparison method. The results disclose the need for initial education to cover pedagogical aspects for nurses; to assume permanent education as fundamental in view of the complexity of contents and teaching; and to use mentoring/monitoring and to value learning with experienced teachers, with a view to the development of quality teaching.
Abstract:
An implicitly parallel method for integral-block driven restricted active space self-consistent field (RASSCF) algorithms is presented. The approach is based on a model space representation of the RAS active orbitals with an efficient expansion of the model subspaces. The applicability of the method is demonstrated with a RASSCF investigation of the first two excited states of indole.
Abstract:
The impact of personality and job characteristics on parental rearing styles was compared in 353 employees. Hypotheses concerning the relationships between personality and job variables were formulated in accordance with findings in past research and Belsky's model (1984). Structural equation nested models showed that Aggression-Hostility, Sociability and job Demand were predictive of Rejection and Emotional Warmth parenting styles, providing support for some of the hypothesized relationships. The findings suggest a well-balanced association of personality variables with both parenting styles: Aggression-Hostility was positively related to Rejection and negatively to Emotional Warmth, whereas Sociability was positively related to Emotional Warmth and negatively related to Rejection. Personality dimensions explained a greater amount of variance in observed parenting styles. However, a model that considered both personality and job dimensions as antecedent variables of parenting was the best representation of the observed data, as both systems play a role in the prediction of parenting behavior.
Abstract:
Identification of CD8+ cytotoxic T lymphocyte (CTL) epitopes has traditionally relied upon testing of overlapping peptide libraries for their reactivity with T cells in vitro. Here, we pursued deep ligand sequencing (DLS) as an alternative method of directly identifying those ligands that are epitopes presented to CTLs by the class I human leukocyte antigens (HLA) of infected cells. Soluble class I HLA-A*11:01 (sHLA) was gathered from HIV-1 NL4-3-infected human CD4+ SUP-T1 cells. HLA-A*11:01 harvested from infected cells was immunoaffinity purified and acid boiled to release heavy and light chains from peptide ligands that were then recovered by size-exclusion filtration. The ligands were first fractionated by high-pH high-pressure liquid chromatography and then subjected to separation by nano-liquid chromatography (nano-LC)–mass spectrometry (MS) at low pH. Approximately 10 million ions were selected for sequencing by tandem mass spectrometry (MS/MS). HLA-A*11:01 ligand sequences were determined with PEAKS software and confirmed by comparison to spectra generated from synthetic peptides. DLS identified 42 viral ligands presented by HLA-A*11:01, and 37 of these were previously undetected. These data demonstrate that (i) HIV-1 Gag and Nef are extensively sampled, (ii) ligand length variants are prevalent, particularly within Gag and Nef hot spots where ligand sequences overlap, (iii) noncanonical ligands are T cell reactive, and (iv) HIV-1 ligands are derived from de novo synthesis rather than endocytic sampling. Next-generation immunotherapies must factor these nascent HIV-1 ligand length variants and the finding that CTL-reactive epitopes may be absent during infection of CD4+ T cells into strategies designed to enhance T cell immunity.
Abstract:
Recent concessions in France and in the US have resulted in a dramatic difference in the valuation placed on the toll roads; the price paid by the investors in France was twelve times current cash flow, whereas investors paid sixty times current cash flow for the US toll roads. In this paper we explore two questions: what accounts for the difference in these multiples, and what are the implications with respect to the public interest? Our analysis illustrates how structural and procedural decisions made by the public owner affect the concession price. Further, the terms of the concession have direct consequences that are enjoyed or borne by the various stakeholders of the toll road.
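The gap between the two multiples can be given a back-of-envelope reading. Under a constant-growth (Gordon) perpetuity, which is my framing and not the paper's, price = cash flow / (r - g), so a multiple m implies a discount rate r = g + 1/m; the growth assumption below is invented.

```python
# Hedged back-of-envelope: what discount rate does each cash-flow
# multiple imply under a constant-growth perpetuity? (Gordon framing
# and the 2% growth assumption are mine, not the paper's.)
def implied_discount_rate(multiple, growth=0.02):
    # price = cf / (r - g)  =>  multiple = 1 / (r - g)  =>  r = g + 1/m
    return growth + 1.0 / multiple

r_france = implied_discount_rate(12)   # French concessions: 12x cash flow
r_us = implied_discount_rate(60)       # US concessions: 60x cash flow
```

Read this way, the 12x French price implies roughly a 10% required return and the 60x US price under 4%, which is why the paper asks what structural and procedural choices could justify such different valuations.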
Abstract:
Report for the scientific sojourn at the Swiss Federal Institute of Technology Zurich, Switzerland, between September and December 2007. In order to make robots useful assistants for our everyday life, the ability to learn and recognize objects is of essential importance. However, object recognition in real scenes is one of the most challenging problems in computer vision, as numerous difficulties must be dealt with. Furthermore, in mobile robotics a new challenge is added to the list: computational complexity. In a dynamic world, information about the objects in the scene can become obsolete before it is ready to be used if the detection algorithm is not fast enough. Two recent object recognition techniques have achieved notable results: the constellation approach proposed by Lowe and the bag of words approach proposed by Nistér and Stewénius. The Lowe constellation approach is the one currently being used in the robot localization task of the COGNIRON project. This report is divided into two main sections. The first section is devoted to briefly reviewing the currently used object recognition system, the Lowe approach, and bringing to light the drawbacks found for object recognition in the context of indoor mobile robot navigation. Additionally, the proposed improvements for the algorithm are described. In the second section the alternative bag of words method is reviewed, as well as several experiments conducted to evaluate its performance with our own object databases. Furthermore, some modifications to the original algorithm to make it suitable for object detection in unsegmented images are proposed.
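The bag-of-words pipeline the report evaluates can be sketched compactly. Nistér and Stewénius actually use a hierarchical vocabulary tree; a flat k-means vocabulary is shown here for brevity, and the random 16-D descriptors stand in for real local features (e.g. SIFT), so all sizes and data below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k=8, iters=20):
    """Plain k-means on local descriptors -> visual vocabulary."""
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest centroid
        dist = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = descriptors[labels == j].mean(axis=0)
    return centroids

def bow_histogram(descriptors, vocabulary):
    """Quantize each descriptor to its nearest visual word and count."""
    dist = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = dist.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()        # L1-normalized image signature

train = rng.normal(size=(500, 16))  # stand-in 16-D local descriptors
vocab = build_vocabulary(train, k=8)
query = bow_histogram(rng.normal(size=(40, 16)), vocab)
```

The appeal for mobile robotics is speed: once the vocabulary is built offline, describing an image is a nearest-word lookup per descriptor, and images are compared by cheap histogram distances rather than by geometric matching of feature constellations.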
Abstract:
Lean meat percentage (LMP) is the criterion for carcass classification and it must be measured on line and objectively. The aim of this work was to compare the root mean squared error of prediction (RMSEP) of the LMP measured with the following devices: Fat-O-Meat’er (FOM), UltraFOM (UFOM), AUTOFOM and VCS2000. For this reason the same 99 carcasses were measured using all 4 apparatus and dissected according to the European Reference Method. Moreover, a subsample of the carcasses (n=77) was fully scanned with X-ray Computed Tomography (CT) equipment. The RMSEP calculated with leave-one-out cross validation was lower for FOM and AUTOFOM (1.8% and 1.9%, respectively) and higher for UFOM and VCS2000 (2.3% for both devices). The error obtained with CT was the lowest (0.96%), in accordance with previous results, but CT cannot be used on line. It can be concluded that FOM and AUTOFOM presented better accuracy than UFOM and VCS2000.
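The validation statistic used above, RMSEP with leave-one-out cross validation, is easy to state in code: refit the prediction model with each carcass held out, predict it, and pool the squared prediction errors. The data below are a hypothetical stand-in (two invented depth predictors, illustrative noise), not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmsep_loo(X, y):
    """Root mean squared error of prediction, leave-one-out CV,
    for an ordinary least-squares model with intercept."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        Xi = np.column_stack([np.ones(keep.sum()), X[keep]])
        beta, *_ = np.linalg.lstsq(Xi, y[keep], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ beta
        errors[i] = y[i] - pred                 # held-out prediction error
    return np.sqrt(np.mean(errors ** 2))

# Hypothetical stand-in: 99 carcasses, two predictors, LMP in percent
n = 99
X = rng.normal(size=(n, 2))
lmp = 58.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=1.8, size=n)
error = rmsep_loo(X, lmp)
```

Because each carcass is predicted by a model that never saw it, the LOO RMSEP approximates the error a device would make on a new carcass, which is why it is the right yardstick for comparing FOM, UFOM, AUTOFOM and VCS2000.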
Abstract:
Report for the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first work, we employed the Energy-Decomposition Analysis (EDA) to investigate aromaticity on Fischer carbenes as it is related through all the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature like NICS, is useful not only for quantitative results but also to measure the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that electron density is more concentrated on the outer rings than in the central one. The strain-induced bond localization plays a major role as a driving force to keep the more substituted ring as the less aromatic. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry for explaining aromaticity and regioselectivity reactions found in those systems. In the second work, we employed the Turbomole program package and density functionals of the best performance in the state of the art to explore reaction mechanisms in noble gas chemistry. In particular, we were interested in compounds of the form H--Ng--Ng--F (where Ng (Noble Gas) = Ar, Kr and Xe) and we investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and difficult to observe experimentally.
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of the internal effects. These internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal is also of additional interest in terms of policy implications. Indeed, the hybrid approach may provide useful information for the design of "second best" stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
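The two approaches being compared can be made concrete on a toy economy. CMM backward linkages are column sums of the Leontief inverse, internal (own-sector) effects included; HEM, in one common variant (several exist in the literature), deletes a sector's row and column and measures the economy-wide output loss, so only external effects remain. The 3-sector coefficients and final demands below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-sector economy
A = np.array([[0.10, 0.30, 0.05],    # technical coefficients matrix
              [0.20, 0.05, 0.25],
              [0.15, 0.10, 0.08]])
f = np.array([100.0, 80.0, 120.0])   # final demand by sector
I = np.eye(3)

# CMM: backward linkages = column sums of the Leontief inverse
leontief = np.linalg.inv(I - A)
cmm_backward = leontief.sum(axis=0)

# HEM (one common variant): zero out a sector's row, column and final
# demand, recompute total output, and take the economy-wide loss.
x = leontief @ f                     # baseline gross output

def extraction_loss(j):
    A_ext, f_ext = A.copy(), f.copy()
    A_ext[j, :] = 0.0
    A_ext[:, j] = 0.0
    f_ext[j] = 0.0
    x_ext = np.linalg.inv(I - A_ext) @ f_ext
    return x.sum() - x_ext.sum()

hem_loss = np.array([extraction_loss(j) for j in range(3)])
```

Comparing the two rankings on such a toy matrix makes the paper's point visible: the CMM measure always counts the sector's own multiplier (its value exceeds 1 even for an isolated sector), while the extraction loss isolates what the rest of the economy forgoes.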