918 results for dependent data
Abstract:
ABSTRACT: The ability of Antarctic krill Euphausia superba Dana to withstand the overwintering period is critical to their success. Laboratory evidence suggests that krill may shrink in body length during this time in response to the low availability of food. Nevertheless, verification that krill can shrink in the natural environment is lacking because winter data are difficult to obtain. One of the few sources of winter krill population data is commercial vessels. We examined length-frequency data of adult krill (>35 mm total body length) obtained from commercial vessels in the Scotia-Weddell region and compared our results with those obtained from a combination of scientific and commercial sampling operations carried out in this region at other times of the year. Our analyses revealed body-length shrinkage in adult females, but not males, over winter, based on both the tracking of modal size classes across seasons and sex-ratio patterns. Other explanatory factors, such as differential mortality, immigration and emigration, could not explain the observed differences. The same pattern was also observed at South Georgia and in the Western Antarctic Peninsula. Fitted seasonally modulated von Bertalanffy growth functions predicted overwintering shrinkage in all body-length classes of females, but only stagnation of growth in males. This shrinkage most likely reflects morphometric changes resulting from the contraction of the ovaries and is not necessarily an outcome of winter hardship. The sex-dependent changes that we observed need to be incorporated into life-cycle and population-dynamic models of this species, particularly those used in managing the fishery. KEY WORDS: Southern Ocean · Population dynamics · Production · Life cycle · Fishery
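The seasonally modulated von Bertalanffy growth function mentioned above is not specified in the abstract; a minimal sketch, assuming the widely used Somers (1988) parameterization and illustrative parameter values (not the fitted values from the study):

```python
import math

def seasonal_vbgf(t, L_inf, K, t0, C, ts):
    """Somers-type seasonally modulated von Bertalanffy growth function.

    t     : age (years)
    L_inf : asymptotic body length (mm)
    K     : growth coefficient (1/year)
    t0    : theoretical age at zero length
    C     : amplitude of the seasonal oscillation (C > 1 allows shrinkage)
    ts    : phase shift; ts + 0.5 is the winter point of slowest growth
    """
    def S(x):
        # seasonal oscillation term of the Somers formulation
        return (C * K / (2 * math.pi)) * math.sin(2 * math.pi * (x - ts))
    return L_inf * (1 - math.exp(-K * (t - t0) - S(t) + S(t0)))
```

With C > 1 the instantaneous growth rate becomes negative around the winter point, which is how a fitted curve can express overwintering shrinkage (as reported for females), while C ≤ 1 can yield only stagnation (as reported for males).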
Abstract:
Activation triggers the exchange of subunits in Ca(2+)/calmodulin-dependent protein kinase II (CaMKII), an oligomeric enzyme that is critical for learning, memory, and cardiac function. The mechanism by which subunit exchange occurs remains elusive. We show that the human CaMKII holoenzyme exists in dodecameric and tetradecameric forms, and that the calmodulin (CaM)-binding element of CaMKII can bind to the hub of the holoenzyme and destabilize it to release dimers. The structures of CaMKII from two distantly diverged organisms suggest that the CaM-binding element of activated CaMKII acts as a wedge by docking at intersubunit interfaces in the hub. This converts the hub into a spiral form that can release or gain CaMKII dimers. Our data reveal a three-way competition for the CaM-binding element, whereby phosphorylation biases it towards the hub interface, away from the kinase domain and calmodulin, thus unlocking the ability of activated CaMKII holoenzymes to exchange dimers with unactivated ones.
Abstract:
Calcium/calmodulin-dependent kinase kinase 2 (CaMKK2) has been implicated in a range of conditions and pathologies, from prostate to hepatic cancer. Here, we describe the expression in Escherichia coli and the purification protocol for the following constructs: full-length CaMKK2 in complex with CaM, CaMKK2 'apo', CaMKK2 (165-501) in complex with CaM, and the CaMKK2 F267G mutant. The protocols described have been optimized for maximum yield and purity with minimal purification steps, and the proteins were subsequently used to develop a fluorescence-based assay for drug binding to the kinase, described in "Using the fluorescent properties of STO-609 as a tool to assist structure-function analyses of recombinant CaMKK2".
Abstract:
In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method spatially interpolates the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, or when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values, weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence were assessed through a statistical analysis of the parameters of approx. 35 000 PV plants. The plant parameters were found to be statistically dependent on the size and location of the PV plants. Accordingly, separate statistics were derived for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches was compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz.
It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
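The weighted-averaging step of the probabilistic approach can be sketched as follows; the parameter sets, frequencies and power model below are illustrative stand-ins, not the thesis's actual transposition model:

```python
import numpy as np

def probabilistic_power(irradiance, param_sets, frequencies, power_model):
    """Expected PV power: average the power computed for each frequently
    occurring parameter set, weighted by that set's frequency of occurrence."""
    freqs = np.asarray(frequencies, dtype=float)
    freqs = freqs / freqs.sum()            # normalise to a probability mass
    powers = np.array([power_model(irradiance, p) for p in param_sets])
    return float(np.dot(freqs, powers))

# Toy power model (hypothetical): output scales with irradiance and a
# tilt/azimuth-dependent geometric factor carried in the parameter set.
def toy_model(irradiance, params):
    return irradiance * params["tilt_factor"]

params = [{"tilt_factor": 0.9}, {"tilt_factor": 1.0}, {"tilt_factor": 0.8}]
freqs = [0.2, 0.5, 0.3]
expected = probabilistic_power(800.0, params, freqs, toy_model)  # ≈ 736.0
```

In the thesis, the parameter frequencies would come from the statistics over the ~35 000 plants, per capacity class and zip-code region.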
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Data mining, a much-discussed term, has been studied in various fields. Its potential for refining decision-making, revealing hidden patterns and creating valuable knowledge has won the attention of scholars and practitioners. However, few studies attempt to combine data mining with libraries, where data generation occurs all the time. This thesis aims to fill that gap, exploring the opportunities data mining creates for enhancing one of the most important elements of libraries: the reference service. To demonstrate the feasibility and applicability of data mining, the literature is reviewed to establish a critical understanding of data mining in libraries and of the current status of library reference services. The review indicates that free online data resources, other than data generated on social media, are rarely applied in current library data mining initiatives; this motivated the present study to utilize free online resources. Furthermore, a natural match between data mining and libraries is established, grounded in the data richness of libraries and in data mining being a form of knowledge, an easy choice for libraries, and a sensible way to overcome reference service challenges. This match, especially the prospect that data mining could help library reference services, lays the main theoretical foundation for the empirical work in this study. Turku Main Library was selected as the case to answer the research question: is data mining feasible and applicable for improving the reference service? Daily visits to Turku Main Library from 2009 to 2015 serve as the resource for data mining, and corresponding weather conditions were collected from Weather Underground, a free online source.
Before analysis, the collected dataset was cleansed and preprocessed to ensure the quality of the data mining. Multiple regression analysis was employed to mine the final dataset, with hourly visits as the dependent variable and weather conditions, the Discomfort Index and the day of the week as independent variables. Four seasonal models were established to predict visits in each season; patterns were identified for the different seasons and implications drawn from them. In addition, library-climate points were generated by a clustering method, which simplifies the process of librarians using weather data to forecast library visits. The data mining results were then interpreted from the perspective of improving the reference service. Finally, the results of the case study were presented to librarians to collect professional opinions on the possibility of employing data mining to improve reference services. The opinions collected were positive, which suggests that it is feasible to utilize data mining as a tool to enhance library reference services.
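The regression step, with hourly visits as the dependent variable and weather and day-of-week predictors as independent variables, can be sketched with ordinary least squares on synthetic data; all variable names and coefficient values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the thesis variables: hourly visits regressed on
# temperature, a discomfort index, and day-of-week dummy variables.
n = 500
temp = rng.uniform(-10, 25, n)
discomfort = rng.uniform(0, 30, n)
dow = rng.integers(0, 7, n)
dow_dummies = np.eye(7)[dow][:, 1:]        # 6 dummies, first day as baseline

X = np.column_stack([np.ones(n), temp, discomfort, dow_dummies])
true_beta = np.array([120.0, -1.5, -0.8, 5, 4, 3, 2, 10, 15])
visits = X @ true_beta + rng.normal(0, 5, n)   # noisy synthetic visit counts

beta_hat, *_ = np.linalg.lstsq(X, visits, rcond=None)  # OLS coefficient fit
```

A fitted model of this shape can then be evaluated per season, as in the thesis, to predict visiting levels from forecast weather.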
Abstract:
The recent advent of new technologies has led to huge amounts of genomic data. With these data come new opportunities to understand the biological cellular processes underlying hidden regulation mechanisms and to identify disease-related biomarkers for informative diagnostics. However, extracting biological insights from immense amounts of genomic data is a challenging task, so effective and efficient computational techniques are needed to analyze and interpret them. In this thesis, novel computational methods are proposed to address these challenges: a Bayesian mixture model, an extended Bayesian mixture model, and an Eigen-brain approach. The Bayesian mixture framework integrates the Bayesian network with the Gaussian mixture model. Using this framework in conjunction with K-means clustering and principal component analysis (PCA), biological insights are derived, such as context-specific/dependent relationships and nested structures within microarray data in which biological replicates are encapsulated. The Bayesian mixture framework is then extended to explore posterior distributions over network space by incorporating a Markov chain Monte Carlo (MCMC) model. The extended Bayesian mixture model summarizes the sampled network structures by extracting biologically meaningful features. Finally, an Eigen-brain approach is proposed to analyze in situ hybridization data for the identification of cell-type-specific genes, which can be useful for informative blood diagnostics. Computational results with region-based clustering reveal critical evidence of consistency with brain anatomical structure.
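A minimal PCA step of the kind used alongside the Bayesian mixture framework can be sketched with the singular value decomposition; the data below are random stand-ins for expression profiles:

```python
import numpy as np

def pca(X, n_components):
    """Project samples onto their top principal components.
    Rows are samples (e.g. microarray replicates), columns are features."""
    Xc = X - X.mean(axis=0)                     # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # scores in the reduced space

X = np.random.default_rng(1).normal(size=(20, 5))
scores = pca(X, 2)    # 20 samples represented in a 2-D component space
```

The component scores can then feed clustering (e.g. K-means) or visualization, with the first component capturing the largest variance direction.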
Abstract:
Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in analyzed fields, as estimated from the differences when compared with in situ observations. For example, in the 0–300 m layer RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation–model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation can be observed when only half of the data are assimilated. Therefore, Argo observations matter to constrain the model solution, even for an eddy-permitting model configuration. The impact of the Argo floats' data assimilation on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not globally lead to unphysical corrections on the sea surface temperature and sea surface height.
The main conclusion is that the performance of the Mercator Ocean 0.25° global data assimilation system is heavily dependent on the availability of Argo data.
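The RMS difference statistic quoted above (e.g. 0.25 psu, 1.25 °C) can be computed as follows; the misfit values below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def rms_difference(analysis, observations):
    """Root-mean-square difference between analyzed fields and in situ
    observations, the misfit statistic quoted in the abstract."""
    diff = np.asarray(analysis, float) - np.asarray(observations, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical 0-300 m temperature values (°C) for two runs; the relative
# reduction is the statistic reported as 20-40 % improvements with Argo.
rms_with = rms_difference([10.1, 9.8, 10.2], [10.0, 10.0, 10.0])
rms_without = rms_difference([10.4, 9.5, 10.6], [10.0, 10.0, 10.0])
reduction = 1 - rms_with / rms_without
```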
Abstract:
Data files to accompany the article in Nature Communications, in press.
Abstract:
The analysis of the wind flow around buildings is of great interest from the point of view of wind energy assessment, pollutant dispersion control, natural ventilation, and pedestrian wind comfort and safety. Since LES turbulence models are computationally expensive when applied to real geometries, RANS models are still widely used. However, RANS models are very sensitive to the chosen turbulence parametrisation, and the results can vary with the application. In this investigation, the wind flow around an isolated building is simulated using various RANS turbulence models in the open source code OpenFOAM, and the results are compared with benchmark experimental data. To confirm the numerical accuracy of the simulations, a grid dependency analysis is performed and the convergence index and rate are calculated. Hit rates are calculated for all cases; the models that pass a validation criterion are analysed over different regions of the building roof, and the most accurate RANS models for the flow in each region are identified. The characteristics of the wind flow in each region are also analysed from the point of view of wind energy generation, and the most suitable wind turbine model for wind energy exploitation in each region of the building roof is chosen.
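The hit-rate metric is not defined in the abstract; a common formulation in wind-engineering validation counts a simulated value as a hit when it matches the measurement within either a relative or an absolute tolerance. A sketch with illustrative tolerances:

```python
import numpy as np

def hit_rate(sim, obs, rel_tol=0.25, abs_tol=0.05):
    """Fraction of points where the simulation matches the experiment within
    a relative OR an absolute tolerance (tolerances here are illustrative)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    rel_ok = np.abs(sim - obs) <= rel_tol * np.abs(obs)
    abs_ok = np.abs(sim - obs) <= abs_tol   # catches near-zero measurements
    return float(np.mean(rel_ok | abs_ok))

# Hypothetical normalized velocities: simulated vs. measured
hr = hit_rate([1.0, 0.5, 0.01], [1.1, 1.0, 0.0])
```

The absolute tolerance matters near stagnation or recirculation zones, where measured values approach zero and a purely relative criterion would always fail.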
Abstract:
Rainflow counting methods convert a complex load time history into a set of load reversals for use in fatigue damage modeling. They were originally developed to assess fatigue damage associated with mechanical cycling, where creep of the material under load was not considered a significant contributor to failure. However, creep is a significant factor in some cyclic loading cases, such as solder interconnects under temperature cycling, where fatigue life models require the dwell time to account for stress relaxation and creep. This study develops a new version of the multi-parameter rainflow counting algorithm that provides a range-based dwell time estimate for use with time-dependent fatigue damage models. To show its applicability, the method is used to calculate the life of solder joints under a complex thermal cycling regime and is verified by experimental testing. An additional algorithm is developed to reduce the volume of the rainflow counting results: it uses a damage model and a statistical test to determine which of the resulting cycles are statistically insignificant at a given confidence level. This makes the resulting data file smaller and allows a simplified load history to be reconstructed.
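The data-reduction idea, dropping cycles whose damage contribution is negligible, can be sketched as follows; the dwell-aware damage model and the simple damage-fraction cutoff are hypothetical stand-ins for the paper's damage model and statistical test:

```python
def filter_insignificant_cycles(cycles, damage_fraction=0.01):
    """Data-reduction sketch: keep only rainflow cycles whose damage
    contribution reaches a fraction of the total damage.

    cycles: list of (delta_T, dwell_minutes) tuples from rainflow counting.
    """
    def damage(delta_T, dwell):
        # Hypothetical dwell-aware damage law: damage grows with the square
        # of the temperature range and increases with dwell time.
        return (delta_T ** 2.0) * (1 + 0.1 * dwell ** 0.33) / 1e6

    damages = [damage(dT, dw) for dT, dw in cycles]
    total = sum(damages)
    return [c for c, d in zip(cycles, damages) if d >= damage_fraction * total]

# Hypothetical counted cycles: small-range cycles are filtered out
cycles = [(80, 10), (5, 1), (60, 30), (2, 0)]
kept = filter_insignificant_cycles(cycles)
```

In the paper the cutoff is a statistical significance test at a chosen confidence level rather than a fixed damage fraction, but the reduction principle is the same.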
Abstract:
Purpose: To evaluate the effect of triptolide on the induction of apoptosis in human gastric cancer BGC-823 cells. Methods: The cytotoxicity of triptolide was evaluated by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The effect of triptolide on cell proliferation was measured using the lactate dehydrogenase (LDH) assay. Cell apoptosis was determined by Annexin V/propidium iodide (PI) double-staining assay. Results: The MTT results indicate that triptolide significantly decreased cancer cell numbers in a dose- and time-dependent manner. Data from the LDH assay showed that triptolide markedly induced cytotoxicity in gastric cancer cells. Triptolide also markedly induced both early and late apoptosis in BGC-823 cells. In addition, the compound down-regulated the expression of anti-apoptotic B-cell lymphoma-2 (bcl-2) and up-regulated the expression of pro-apoptotic BCL-2-associated X (bax) in a dose-dependent manner. Furthermore, the pro-apoptotic activity of triptolide involved activation of the caspase-3 pathway in BGC-823 cells. Conclusion: Taken together, the findings strongly indicate that the pro-apoptotic activity of triptolide is regulated by a caspase-3-dependent cascade pathway, and triptolide thus merits further development for cancer therapy.
Abstract:
The accuracy of a map depends on the reference dataset used in its construction. Classification analyses used in thematic mapping can, for example, be sensitive to a range of sampling and data quality concerns. With particular focus on the latter, the effects of reference data quality on land cover classifications from airborne thematic mapper data are explored. Variations in sampling intensity and effort are highlighted in a dataset that is widely used in mapping and modelling studies; these may need to be accounted for in analyses. The quality of the labelling in the reference dataset was also a key variable influencing mapping accuracy. Accuracy varied with the amount and nature of mislabelled training cases, and the effects varied between classifiers. The largest impacts on accuracy occurred when mislabelling involved confusion between similar classes. Accuracy was also typically negatively related to the amount of mislabelled cases, and the support vector machine (SVM), which has been claimed to be relatively insensitive to training data error, was the most sensitive of the classifiers investigated, with overall classification accuracy declining by 8% (significant at the 95% level of confidence) when a training set containing 20% mislabelled cases was used.
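The effect of mislabelled training cases can be illustrated with a deliberately noise-sensitive 1-nearest-neighbour classifier, used here as a self-contained stand-in for the classifiers compared in the study; the two-class data are synthetic, not land cover imagery:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn1_predict(Xtr, ytr, Xte):
    """Predict each test point's label from its single nearest training case."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[d.argmin(axis=1)]

# Two well-separated synthetic classes (stand-ins for land cover classes)
Xtr = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
ytr = np.repeat([0, 1], 200)
Xte = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
yte = np.repeat([0, 1], 100)

def accuracy_with_label_noise(noise_frac):
    y_noisy = ytr.copy()
    flip = rng.choice(len(ytr), int(noise_frac * len(ytr)), replace=False)
    y_noisy[flip] = 1 - y_noisy[flip]      # mislabel part of the training set
    return float((knn1_predict(Xtr, y_noisy, Xte) == yte).mean())

clean, noisy = accuracy_with_label_noise(0.0), accuracy_with_label_noise(0.2)
```

For 1-NN the accuracy drop tracks the noise fraction almost directly; the paper's finding is that classifiers differ widely in this sensitivity, with the SVM more affected than its reputation suggests.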