89 results for LEED STRUCTURE-ANALYSIS


Relevance: 30.00%

Abstract:

The complete book can be consulted at: http://hdl.handle.net/2445/32166

Relevance: 30.00%

Abstract:

Amino acid tandem repeats, also called homopolymeric tracts, are extremely abundant in eukaryotic proteins. To gain insight into the genome-wide evolution of these regions in mammals, we analyzed the repeat content in a large data set of rat-mouse-human orthologs. Our results show that human proteins contain more amino acid repeats than rodent proteins and that trinucleotide repeats are also more abundant in human coding sequences. Using the human species as an outgroup, we were able to address differences in repeat loss and repeat gain in the rat and mouse lineages. In this data set, mouse proteins contain substantially more repeats than rat proteins, which can be at least partly attributed to a higher repeat loss in the rat lineage. The data are consistent with a role for trinucleotide slippage in the generation of novel amino acid repeats. We confirm the previously observed functional bias of proteins with repeats, with overrepresentation of transcription factors and DNA-binding proteins. We show that genes encoding amino acid repeats tend to have an unusually high GC content, and that differences in coding GC content among orthologs are directly related to the presence/absence of repeats. We propose that the different GC content isochore structure in rodents and humans may result in an increased amino acid repeat prevalence in the human lineage.
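Repeat detection of this kind can be sketched with a short script; the sequence and the five-residue threshold below are illustrative assumptions, not taken from the study.

```python
import re

def find_homopolymeric_tracts(protein_seq, min_len=5):
    """Return (residue, start, length) for every run of a single
    amino acid of at least min_len in a protein sequence."""
    pattern = r"(.)\1{%d,}" % (min_len - 1)  # a char plus >= min_len-1 copies of it
    return [(m.group(1), m.start(), len(m.group(0)))
            for m in re.finditer(pattern, protein_seq)]

# Hypothetical sequence with a poly-Q and a poly-E tract
seq = "MSTQQQQQQQAGLPKDEEEEEAR"
tracts = find_homopolymeric_tracts(seq)
print(tracts)  # [('Q', 3, 7), ('E', 16, 5)]
```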

Relevance: 30.00%

Abstract:

The factor structure of a back-translated Spanish version (Lega, Caballo and Ellis, 2002) of the Attitudes and Beliefs Inventory (ABI) (Burgess, 1990) is analyzed in a sample of 250 university students. The Spanish version of the ABI is a 48-item self-report inventory using a 5-point Likert scale that assesses rational and irrational attitudes and beliefs. Twenty-four items cover two dimensions of irrationality: a) areas of content (3 subscales), and b) styles of thinking (4 subscales). An Exploratory Factor Analysis (Parallel Analysis with the Unweighted Least Squares method and Promin rotation) was performed with the FACTOR 9.20 software (Lorenzo-Seva and Ferrando, 2013). The results reproduced the four main styles of irrational thinking in relation to the three specific contents of irrational beliefs. However, two factors showed a complex configuration, with important cross-loadings of different items in content and style. Further analyses are needed to review the specific content and style of such items.
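Parallel analysis, the retention criterion named above, keeps the factors whose observed eigenvalues exceed those of random data of the same shape. A minimal numpy sketch on synthetic two-factor data (loadings and sample sizes are assumptions for the example, not the ABI items):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: count correlation-matrix eigenvalues
    that exceed the mean eigenvalues of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))

# Synthetic data: 8 items loading on two latent factors, n = 250
rng = np.random.default_rng(1)
latent = rng.standard_normal((250, 2))
loadings = np.array([[0.8, 0.0]] * 4 + [[0.0, 0.8]] * 4)
items = latent @ loadings.T + 0.4 * rng.standard_normal((250, 8))
n_factors = parallel_analysis(items)
print(n_factors)  # 2
```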

Relevance: 30.00%

Abstract:

The final year project came to us as an opportunity to get involved in a topic that proved attractive during our economics degree: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very active topic nowadays, given the Information Technology boom of recent decades and the consequent exponential increase in the amount of data collected and stored every day. Data analysts able to deal with Big Data and extract useful results from it are in high demand, and, in our understanding, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, that is valid for the analysis of large datasets: Partial Correlation Networks. The structure of the project has been determined by our objectives as it developed. First, the characteristics of the instrument are explained, from the basic ideas up to the features of the model behind it, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrative simulation is performed to show the power and efficiency of the model. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series.
In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument for finding valuable results in Big Data. The findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool to represent cross-sectional interconnections between elements in large data sets. The scope of this project is, however, limited, as there are some sections where deeper analysis would have been appropriate: considering intertemporal connections between elements, the choice of the tuning parameter lambda, and a deeper analysis of the results in the real-data application are examples of aspects in which this project could be extended. To sum up, the analyzed statistical tool has proved to be a very useful instrument for finding relationships that connect the elements of a large data set. Partial correlation networks allow the owner of such a set to observe and analyze existing linkages that could otherwise have been missed.
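The SPACE estimator of Peng et al. (2009) is not part of the usual Python toolkits, so, as an illustrative stand-in, the sketch below builds a sparse partial-correlation network with scikit-learn's graphical lasso; the chain structure of the synthetic data and the 0.1 edge threshold are assumptions for the example.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic chain: variable 0 drives 1, and 1 drives 2, so the only
# direct (partial-correlation) edges should be 0-1 and 1-2.
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.standard_normal((n, p))
X[:, 1] += 0.6 * X[:, 0]
X[:, 2] += 0.6 * X[:, 1]

model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_
# Partial correlations from the estimated precision matrix
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
edges = np.abs(partial_corr) > 0.1   # threshold into a network
print(edges[0, 1], edges[1, 2], edges[0, 2])  # True True False
```

Variables 0 and 2 are marginally correlated, but their partial correlation given variable 1 vanishes, which is exactly the distinction these networks exploit.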

Relevance: 30.00%

Abstract:

Single-stranded DNA (ssDNA) plays a major role in several biological processes. It is therefore of fundamental interest to understand how the elastic response and the formation of secondary structures are modulated by the interplay between base pairing and electrostatic interactions. Here we measure force-extension curves (FECs) of ssDNA molecules in an optical tweezers setup over two orders of magnitude of monovalent and divalent salt conditions, and obtain their elastic parameters by fitting the FECs to semiflexible polymer models. For both monovalent and divalent salts, we find that the electrostatic contribution to the persistence length is proportional to the Debye screening length, varying as the inverse of the square root of the cation concentration. The intrinsic persistence length is equal to 0.7 nm for both types of salt, and divalent cations appear to be 100-fold more effective at screening electrostatic interactions than monovalent ones, in line with what has recently been reported for single-stranded RNA. Finally, we propose an analysis of the FECs using a model that accounts for the effective thickness of the filament at low salt conditions, and a simple phenomenological description that quantifies the formation of non-specific secondary structure at low forces.
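The reported scaling can be illustrated by fitting Lp(c) = Lp0 + b/√c to synthetic data; only the functional form comes from the abstract, and all numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def persistence_length(c, lp0, b):
    """Intrinsic persistence length plus an electrostatic term
    proportional to the Debye length (~ 1/sqrt(concentration))."""
    return lp0 + b / np.sqrt(c)

c = np.array([0.01, 0.05, 0.1, 0.5, 1.0])        # salt concentration, M
lp_obs = persistence_length(c, 0.7, 0.3)          # synthetic data, Lp in nm
lp_obs = lp_obs + 0.01 * np.array([1, -1, 1, -1, 1])  # small "noise"

(lp0_fit, b_fit), _ = curve_fit(persistence_length, c, lp_obs, p0=(1.0, 1.0))
print(round(lp0_fit, 1), round(b_fit, 1))  # recovers ~0.7 and ~0.3
```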


Relevance: 30.00%

Abstract:

Background: Analysing the observed differences in incidence or mortality of a particular disease between two situations (such as time points, geographical areas, gender or other social characteristics) can be useful for both scientific and administrative purposes. From an epidemiological and public health point of view, it is of great interest to assess the effect of demographic factors on these observed differences in order to elucidate the effect of the risk of developing a disease or dying from it. The method proposed by Bashir and Estève, which splits the observed variation into three components (risk, population structure and population size), is a common choice in practice. Results: A web-based application called RiskDiff has been implemented (available at http://rht.iconcologia.net/riskdiff.htm) to perform this kind of statistical analysis, providing text and graphical summaries. Code for the implemented functions in R is also provided. An application to cancer mortality data from Catalonia is used for illustration. Conclusions: Combining epidemiological and demographic factors is crucial for analysing incidence or mortality from a disease, especially if the population pyramids show substantial differences. The tool implemented may serve to promote and disseminate the use of this method and to support epidemiological interpretation and decision making in public health.
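One stepwise decomposition in the spirit of Bashir and Estève can be sketched as follows; the exact RiskDiff formulation may differ, and all population figures below are invented.

```python
import numpy as np

def decompose(N1, w1, r1, N2, w2, r2):
    """Split the change in observed cases between two populations into
    population-size, age-structure and risk components.
    w: age-structure proportions (sum to 1); r: age-specific rates."""
    cases1 = N1 * np.dot(w1, r1)
    cases2 = N2 * np.dot(w2, r2)
    size = (N2 - N1) * np.dot(w1, r1)                  # change in headcount
    structure = N2 * (np.dot(w2, r1) - np.dot(w1, r1))  # change in age mix
    risk = N2 * (np.dot(w2, r2) - np.dot(w2, r1))       # change in rates
    return cases2 - cases1, size, structure, risk

w1 = np.array([0.5, 0.3, 0.2]); r1 = np.array([0.001, 0.005, 0.02])
w2 = np.array([0.4, 0.3, 0.3]); r2 = np.array([0.001, 0.004, 0.015])
total, size, structure, risk = decompose(100000, w1, r1, 120000, w2, r2)
print(np.isclose(total, size + structure + risk))  # True: additive split
```

The three terms sum exactly to the total difference by construction, which is the property that makes this kind of stepwise decomposition interpretable.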

Relevance: 30.00%

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEMs), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOMs) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method which clusters and reduces the dimensionality of input EEMs without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
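A minimal SOM can be written in a few lines of numpy; this sketch trains on synthetic 2-D samples rather than flattened EEMs, and the grid size and learning schedule are arbitrary choices for illustration.

```python
import numpy as np

def train_som(data, grid=(4, 4), n_iter=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a Self-Organising Map: for each sample, find the
    best-matching unit and pull nearby units towards the sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.mgrid[0:h, 0:w]).astype(float)  # unit positions
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(
            np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
        lr = lr0 * np.exp(-t / n_iter)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)    # shrinking neighbourhood
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        theta = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * theta * (x - weights)
    return weights

# Synthetic samples from three clusters stand in for flattened EEMs
rng = np.random.default_rng(1)
samples = np.concatenate(
    [rng.normal(mu, 0.05, (50, 2)) for mu in ([0.1, 0.1], [0.9, 0.1], [0.5, 0.9])])
weights = train_som(samples)
print(weights.shape)  # (4, 4, 2): one prototype vector per map unit
```

Each slice `weights[:, :, k]` is a component plane; correlating those planes is the step the abstract uses to group map dimensions into fluorescence components.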

Relevance: 30.00%

Abstract:

Life cycle analysis (LCA) is a comprehensive method for assessing the environmental impact of a product or an activity over its entire life cycle. The purpose of conducting LCA studies varies from one application to another; in general, the main aim is to reduce the environmental impact of products by guiding the decision-making process towards more sustainable solutions. The most critical phase in an LCA study is the Life Cycle Impact Assessment (LCIA), where the life cycle inventory (LCI) results for the substances involved in the system under study are transformed into understandable impact categories that represent the impact on the environment. In this research work, a general structure clarifying the steps that should be followed in order to conduct an LCA study effectively is presented. These steps are based on the ISO 14040 standard framework. In addition, a survey of the most widely used LCIA methodologies is provided. Recommendations about possible developments and suggestions for further research on the use of LCA and LCIA methodologies are discussed as well.
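The LCIA step described above amounts to mapping an inventory vector onto impact-category scores through a matrix of characterization factors. A toy sketch (the substances and factors are illustrative, not taken from any specific LCIA method):

```python
import numpy as np

# Inventory: kg of each substance emitted over the life cycle
substances = ["CO2", "CH4", "SO2"]
inventory = np.array([1000.0, 5.0, 2.0])

# Characterization factors: rows are impact categories, columns substances.
# Values are made up for illustration.
characterization = np.array([
    [1.0, 28.0, 0.0],   # climate change, kg CO2-eq per kg
    [0.0, 0.0, 1.3],    # acidification, kg SO2-eq per kg
])
impacts = characterization @ inventory
print(impacts)  # [1140.    2.6] -> one score per impact category
```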

Relevance: 30.00%

Abstract:

In the present research we set forth a new, simple trade-off model that allows us to calculate how much debt and, consequently, how much equity a company should have, using easily available information and calculating the cost of debt dynamically on the basis of the effect that the capital structure of the company has on the risk of bankruptcy. The proposed model has been applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We used the simplex optimization method to find the debt level that maximizes firm value, and then compared the estimated debt with the companies' real debt using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies do not show a statistically significant difference between real and estimated debt.
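The optimization step can be illustrated with a deliberately simplified value function, not the paper's model: firm value is unlevered value plus the debt tax shield minus a convex expected bankruptcy cost (and a one-dimensional optimizer stands in for the simplex method used in the study). All parameters are made up.

```python
from scipy.optimize import minimize_scalar

def firm_value(debt, v_unlevered=1000.0, tax=0.30, k=2e-4):
    """Trade-off sketch: tax shield grows linearly with debt,
    expected bankruptcy cost grows quadratically (convex in leverage)."""
    tax_shield = tax * debt
    bankruptcy_cost = k * debt ** 2
    return v_unlevered + tax_shield - bankruptcy_cost

# Maximize value over debt by minimizing its negative
res = minimize_scalar(lambda d: -firm_value(d), bounds=(0, 2000), method="bounded")
print(round(res.x))  # 750: where marginal shield equals marginal cost
```

The optimum solves tax = 2·k·D, i.e. D* = 0.30 / (2 × 2e-4) = 750, which the numerical result reproduces.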

Relevance: 30.00%

Abstract:

In this study, the population structure of the white grunt (Haemulon plumieri) from the northern coast of the Yucatan Peninsula was determined through an otolith shape analysis based on samples collected at three locations: Celestún (N 20°49', W 90°25'), Dzilam (N 21°23', W 88°54') and Cancún (N 21°21', W 86°52'). The otolith outline was described using elliptic Fourier descriptors, which indicated that the H. plumieri population on the northern coast of the Yucatan Peninsula is composed of three geographically delimited units (Celestún, Dzilam and Cancún). Significant differences were observed in mean otolith shape among all samples (PERMANOVA; F2,99 = 11.20, P = 0.0002), and subsequent pairwise comparisons showed that all samples were significantly different from each other. The samples therefore do not belong to a single white grunt population, and the results suggest a structured population along the northern coast of the Yucatan Peninsula.
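A simplified outline descriptor can be computed from the FFT of the contour treated as a complex signal; this is a stand-in for the full elliptic Fourier formulation used in the study, and the test contour is a plain ellipse rather than an otolith.

```python
import numpy as np

def outline_descriptors(x, y, n_harmonics=4):
    """Size-invariant shape descriptors from a closed outline:
    magnitudes of Fourier harmonics 2..n+1, normalised by harmonic 1
    (which captures the gross ellipse)."""
    z = x + 1j * y
    coeffs = np.fft.fft(z) / len(z)
    return np.abs(coeffs[2:2 + n_harmonics]) / np.abs(coeffs[1])

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x, y = 2 * np.cos(t), np.sin(t)               # a pure ellipse
print(np.round(outline_descriptors(x, y), 3))  # higher harmonics vanish
```

For a real otolith outline the higher harmonics are non-zero, and it is those values that feed the PERMANOVA-style comparisons among locations.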

Relevance: 30.00%

Abstract:

Spatio-temporal variability in settlement and recruitment, high mortality during the first life-history stages, and selection may determine the genetic structure of cohorts of long-lived marine invertebrates at small scales. We conducted a spatial and temporal analysis of the common Mediterranean sea urchin Paracentrotus lividus to determine the genetic structure of cohorts at different scales. In Tossa de Mar (NW Mediterranean), recruitment was followed over 5 consecutive springs (2006-2010). In spring 2008, recruits and two-year-old individuals were collected at 6 locations along the East and South Iberian coasts, separated by 200 to over 1,100 km. All cohorts presented high genetic diversity based on a fragment of mtCOI. Our results showed marked genetic homogeneity in the temporal monitoring and a low degree of spatial structure in 2006. In 2008, coinciding with an abnormality in the usual circulation patterns in the area, the genetic structure of the southern populations studied changed markedly, with the arrival of many private haplotypes. This highlights the importance of point events in renewing the genetic makeup of populations, which can only be detected through analysis of cohort structure combining temporal and spatial perspectives.

Relevance: 30.00%

Abstract:

One of the goals of psychological assessment is the adaptation of its instruments to different populations. The objective of this study is to establish the psychometric properties and dimensional structure of the Spanish version of the Coping Responses Inventory - Adult Form (CRI-Adult, Moos, 1993). The following criteria were analyzed: a) descriptive statistics; b) internal consistency reliability (Cronbach's alpha and intercorrelations between scales); c) test-retest reliability (4-week interval); d) dimensionality of the CRI-Adult (exploratory factor analysis); e) construct validity (confirmatory factor analysis); f) convergent criterion validity (correlations between the CRI-Adult and the Coping Strategies Indicator, CSI, Amirkhan, 1990); and g) predictive criterion validity (correlations between the CRI-Adult and the SCL-90-R, Derogatis, 1983). The results, obtained with 800 adults from Barcelona and the surrounding area (334 men and 466 women, aged 18 to 76 years), indicate that the Spanish version of the CRI-Adult has satisfactory psychometric properties that support its use with confidence.
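Cronbach's alpha, one of the reliability statistics listed above, is straightforward to compute; the data below are synthetic, not the CRI-Adult sample.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic respondents: 6 items sharing one latent trait plus noise
rng = np.random.default_rng(0)
trait = rng.normal(size=(800, 1))
items = trait + 0.8 * rng.normal(size=(800, 6))
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # high alpha, since all items share the trait
```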

Relevance: 30.00%

Abstract:

The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model which underlies the ICI (Quijano et al., 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. The elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model which underlies the relation between Commitment and Identification, although each one is operationally distinct.

Relevance: 30.00%

Abstract:

Flood simulation studies use spatial-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of the convective structures of precipitation systems producing floods in Catalonia (NE Spain). To this end, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are introduced into and analysed with a GIS. In a first step, different groups of connected pixels with convective precipitation are identified, and only convective structures with an area greater than 32 km² are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation and mean areal precipitation, are the distributions that best fit the observations. The statistical descriptors and probability distribution functions obtained are of direct use as input to spatial rainfall generators.
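Fitting the distributions the study selected can be sketched with SciPy's maximum-likelihood fitters; the cell-area sample below is synthetic, with the location fixed at the 32 km² selection threshold.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic cell areas drawn from a Generalized Pareto distribution
# above the 32 km^2 threshold (shape and scale are made-up values)
rng = np.random.default_rng(0)
areas = genpareto.rvs(c=0.2, loc=32, scale=50, size=2000, random_state=rng)

# Maximum-likelihood fit with the location fixed at the threshold
c_hat, loc_hat, scale_hat = genpareto.fit(areas, floc=32)
print(round(c_hat, 2), round(scale_hat, 1))  # close to the true 0.2 and 50
```

The analogous call for the perimeter or orientation samples would use `scipy.stats.genextreme.fit`, the Generalized Extreme Value family the study selected for those characteristics.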