958 results for subset consistency
Abstract:
High-grade Brainstem Glioma (BSG), also known as Diffuse Intrinsic Pontine Glioma (DIPG), is an incurable pediatric brain cancer. Increasing evidence supports the existence of regional differences in gliomagenesis such that BSG is considered a distinct disease from glioma of the cerebral cortex (CG). In an effort to elucidate unique characteristics of BSG, we conducted expression analysis of mouse PDGF-B-driven BSG and CG initiated in Nestin progenitor cells and identified a short list of expression changes specific to the brainstem gliomagenesis process, including abnormal upregulation of paired box 3 (Pax3). In the neonatal mouse brain, Pax3 expression marks a subset of brainstem progenitor cells, while it is absent from the cerebral cortex, mirroring its regional expression in glioma. Ectopic expression of Pax3 in normal brainstem progenitors in vitro shows that Pax3 inhibits apoptosis. This inhibition of apoptosis is p53-dependent, however; in the absence of p53, Pax3 instead promotes proliferation of brainstem progenitors. In vivo, Pax3 enhances PDGF-B-driven gliomagenesis by shortening tumor latency and increasing tumor penetrance and grade, in a region-specific manner, while loss of Pax3 function extends survival of PDGF-B-driven, p53-deficient BSG-bearing mice by 33%. Importantly, Pax3 is regionally expressed in human glioma as well, with high PAX3 mRNA characterizing 40% of human BSG, revealing a subset of tumors that is significantly associated with PDGFRA alterations and amplifications of cell cycle regulatory genes, and is mutually exclusive with ACVR1 mutations. Collectively, these data suggest that regional Pax3 expression not only marks a novel subset of BSG but also contributes to PDGF-B-induced brainstem gliomagenesis.
Abstract:
The mechanisms responsible for the increased cardiovascular risk associated with HIV-1 infection are incompletely defined. In the present study, we used flow cytometry to examine activation phenotypes of monocyte subpopulations in patients with HIV-1 infection or acute coronary syndrome in order to identify common cellular profiles. Nonclassic (CD14(+)CD16(++)) and intermediate (CD14(++)CD16(+)) monocytes are proportionally increased and express high levels of tissue factor and CD62P in HIV-1 infection. These proportions are related to viremia, T-cell activation, and plasma levels of IL-6. In vitro exposure of whole blood samples from uninfected control donors to lipopolysaccharide increased surface tissue factor expression on all monocyte subsets, but exposure to HIV-1 resulted in activation only of nonclassic monocytes. Remarkably, the profile of monocyte activation in uncontrolled HIV-1 disease mirrors that of acute coronary syndrome in uninfected persons. Therefore, drivers of immune activation and inflammation in HIV-1 disease may alter monocyte subpopulations and activation phenotypes, contributing to a pro-atherothrombotic state that may drive cardiovascular risk in HIV-1 infection.
Abstract:
Translocations in myeloma are thought to occur solely in mature B cells in the germinal center through class switch recombination (CSR). We used a targeted capture technique followed by massively parallel sequencing to determine the exact breakpoints in both the immunoglobulin heavy chain (IGH) locus and the partner chromosome in 61 presentation multiple myeloma samples. The majority of samples (62%) have a breakpoint within the switch regions upstream of the IGH constant genes and are generated through CSR in a mature B cell. However, the proportion of CSR translocations is not consistent between cytogenetic subgroups. We find that 100% of t(4;14) are CSR-mediated; however, 21% of t(11;14) and 25% of t(14;20) are generated through recombination activating gene (RAG)-mediated DH-JH recombination, indicating that they occur earlier in B-cell development, at the pro-B-cell stage in the bone marrow. These two groups also generate translocations through receptor revision, as determined by the breakpoints and the mutation status of the segments used, in 10% and 50% of t(11;14) and t(14;20) samples, respectively. The study indicates that, in a significant number of cases, the translocation-based etiological events underlying myeloma may arise at the level of pro-B-cell hematopoietic progenitors, much earlier in B-cell development than was previously thought.
Abstract:
Bis-(3′-5′)-cyclic dimeric guanosine monophosphate, or cyclic di-GMP (c-di-GMP), is a ubiquitous bacterial second messenger that regulates processes such as biofilm formation, motility, and virulence. C-di-GMP is synthesized by diguanylate cyclases (DGCs), while phosphodiesterases (PDE-As) end signaling by linearizing c-di-GMP to 5′-phosphoguanylyl-(3′,5′)-guanosine (pGpG), which is then hydrolyzed to two GMPs by previously unidentified enzymes termed PDE-Bs. To identify the PDE-B responsible for pGpG turnover, we screened a Vibrio cholerae open reading frame library for pGpG-binding proteins. This screen led to the identification of oligoribonuclease (Orn). Purified Orn binds pGpG and cleaves it to GMP in vitro. A deletion mutant of orn in Pseudomonas aeruginosa was highly defective in pGpG turnover and accumulated pGpG. Deletion of orn also resulted in accumulation of c-di-GMP, likely through pGpG-mediated inhibition of the PDE-As, causing an increase in c-di-GMP-governed auto-aggregation and biofilm formation. Thus, we found that Orn serves as the primary PDE-B enzyme in P. aeruginosa that removes pGpG, completing the final step in the c-di-GMP degradation pathway. However, not all bacteria that utilize c-di-GMP signaling have an ortholog of orn, suggesting that other PDE-Bs must exist. We therefore asked whether RNases that cleave small oligoribonucleotides in other species could also act as PDE-Bs, and found that NrnA, NrnB, and NrnC rapidly degrade pGpG to GMP. Furthermore, they can reduce the elevated aggregation and biofilm formation of P. aeruginosa ∆orn. Together, these results indicate that rather than having a single dedicated PDE-B, different bacteria utilize distinct RNases to cleave pGpG and complete c-di-GMP signaling. The ∆orn strain also has a growth defect, indicating changes in other regulatory processes that could be due to pGpG accumulation, c-di-GMP accumulation, or another effect of the loss of Orn. We investigated the genetic pathways responsible for this growth defect using a transposon suppressor screen and examined transcriptional changes by RNA-Seq. This work shows that c-di-GMP degradation intersects with RNA degradation at the point of Orn and functionally related RNases.
Abstract:
This paper explores the effect of using regional data for livestock attributes on estimation of greenhouse gas (GHG) emissions for the northern beef industry in Australia, compared with using state/territory-wide values, as currently used in Australia's national GHG inventory report. Regional GHG emissions associated with beef production are reported for 21 defined agricultural statistical regions within state/territory jurisdictions. A management scenario for reduced emissions that could qualify as an Emissions Reduction Fund (ERF) project was used to illustrate the effect of regional-level model parameters on estimated abatement levels. Using regional parameters instead of state-level parameters for liveweight (LW), LW gain and the proportion of cows lactating, together with an expanded number of livestock classes, gives a 5.2% reduction in estimated emissions (range +12% to –34% across regions). Estimated GHG emissions intensity (emissions per kilogram of LW sold) varied across the regions by up to 2.5-fold, ranging from 10.5 kg CO2-e kg–1 LW sold for Darling Downs, Queensland, through to 25.8 kg CO2-e kg–1 LW sold for the Pindan and North Kimberley, Western Australia. This range was driven by differences in production efficiency, reproduction rate, growth rate and survival. This suggests that some regions in northern Australia are likely to have substantial opportunities for GHG abatement and higher livestock income. However, this must be coupled with the availability of management activities that can be implemented to improve production efficiency; wet-season phosphorus (P) supplementation is one such practice. An ERF case study comparison showed that P supplementation of a typical-sized herd produced an estimated reduction of 622 t CO2-e year–1, or 7%, compared with a non-P-supplemented herd. However, the different model parameters used by the National Inventory Report and the ERF project mean that there is a discrepancy between the herd emissions for project cattle excised from the national accounts (13 479 t CO2-e year–1) and the baseline herd emissions estimated for the ERF project (8 896 t CO2-e year–1) before P supplementation was implemented. Regionalising livestock model parameters in both ERF projects and the national accounts would make it easier to accurately reflect emissions savings from this type of emissions reduction project in Australia's national GHG accounts.
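The headline figures above reduce to simple arithmetic, sketched below in Python for concreteness. The function names are illustrative; only the numbers quoted in the abstract (10.5 and 25.8 kg CO2-e per kg LW sold, and 622 t CO2-e against an 8,896 t baseline) are taken from it.

# Sketch of the emissions-intensity and abatement arithmetic described above.
# Figures come from the abstract; function names are illustrative only.

def emissions_intensity(total_emissions_kg_co2e: float, liveweight_sold_kg: float) -> float:
    """Emissions intensity in kg CO2-e per kg liveweight (LW) sold."""
    return total_emissions_kg_co2e / liveweight_sold_kg

def abatement_fraction(baseline_t_co2e: float, project_t_co2e: float) -> float:
    """Fractional reduction of the project herd relative to the baseline herd."""
    return (baseline_t_co2e - project_t_co2e) / baseline_t_co2e

if __name__ == "__main__":
    # Regional spread in intensity reported in the abstract (kg CO2-e per kg LW sold)
    darling_downs = 10.5
    pindan_north_kimberley = 25.8
    print(f"Intensity spread: {pindan_north_kimberley / darling_downs:.1f}-fold")  # ~2.5-fold

    # ERF case study: 622 t CO2-e/year abatement against an 8,896 t CO2-e/year baseline
    baseline = 8896.0
    project = baseline - 622.0
    print(f"Abatement: {abatement_fraction(baseline, project):.1%}")  # ~7%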
Abstract:
This dissertation research points out major challenges for current Knowledge Organization (KO) systems such as subject gateways and web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which are not well suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect users' intuitively and spontaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further qualitative and quantitative research is needed to verify its quality and benefit. This research examined the indexing consistency of social tagging in comparison to professional indexing in order to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, have tended to exclude users, and have focused mainly on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing an Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, since it is suitable for dealing with a large number of indexers. As a second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of a consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than among the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; tags of higher specificity tended to have higher semantic relatedness to professionals' keywords, leading to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document, and showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings showed that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents representing heterogeneous digital media resources. These results lead to the conclusion that there should be greater awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is a first, necessary step toward utilizing social tagging in digital information organization by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, which provided a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases to improve upon) professional indexing.
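As an illustration of the style of VSM-based consistency measure discussed above (not the dissertation's exact formulation), the Python sketch below represents each indexer's term set as a binary vector over the combined vocabulary and takes the cosine similarity as the consistency score; the example terms are invented.

# Illustrative Vector Space Model-based indexing consistency: each indexer's
# term set becomes a binary term vector, and consistency is the cosine
# similarity between the two vectors.
from math import sqrt

def term_vector(terms: set[str], vocabulary: list[str]) -> list[int]:
    """Binary vector over the combined vocabulary (1 if the indexer used the term)."""
    return [1 if term in terms else 0 for term in vocabulary]

def cosine_consistency(indexer_a: set[str], indexer_b: set[str]) -> float:
    """Cosine similarity between two indexers' binary term vectors."""
    vocabulary = sorted(indexer_a | indexer_b)
    a = term_vector(indexer_a, vocabulary)
    b = term_vector(indexer_b, vocabulary)
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Example: overlap between a tagger's tags and a professional indexer's keywords
tagger = {"folksonomy", "tagging", "classification", "web"}
professional = {"social tagging", "classification", "indexing", "web"}
print(f"VSM consistency: {cosine_consistency(tagger, professional):.2f}")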
Abstract:
Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied: (1) format checks, which ensure that the file formats match the Argo standards precisely, and (2) data consistency checks, which are performed on a file after it passes the format checks. The consistency checks do not duplicate any of the quality control checks performed elsewhere; they can be thought of as "sanity checks" that ensure the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the "data standard" checks are the "mandatory parameters" defined for meta-data files and the technical parameter names in technical data files. Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC. Reference tables and data standards: many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User's Manual. (The FileChecker implements "text versions" of these tables.)
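The following Python sketch illustrates the kind of "mandatory parameters" consistency check described above. The parameter names and the dictionary-based file representation are simplifying assumptions; the actual FileChecker operates on Argo NetCDF files against the full published reference tables.

# Minimal sketch of a "mandatory parameter" consistency check. Parameter names
# and the dict-based record are illustrative, not the real FileChecker rules.

MANDATORY_META_PARAMETERS = {"PLATFORM_NUMBER", "PROJECT_NAME", "DATA_CENTRE"}  # illustrative subset

def check_meta_record(meta: dict[str, str]) -> tuple[list[str], list[str]]:
    """Return (errors, warnings) for a meta-data record: errors reject, warnings still distribute."""
    errors, warnings = [], []
    missing = MANDATORY_META_PARAMETERS - meta.keys()
    if missing:
        errors.append(f"Missing mandatory parameters: {sorted(missing)}")
    if meta.get("PROJECT_NAME", "").strip() == "":
        warnings.append("PROJECT_NAME is present but empty")
    return errors, warnings

errors, warnings = check_meta_record({"PLATFORM_NUMBER": "6901234", "PROJECT_NAME": "Argo"})
print("rejected" if errors else "accepted", errors, warnings)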
Abstract:
A Similar Exposure Group (SEG) can be created through the evaluation of workers performing the same or similar tasks, the hazards they are exposed to, the frequency and duration of their exposures, the engineering controls available during their operations, the personal protective equipment used, and exposure data. For this report, samples from one facility, which has collected nearly 40,000 samples of various types, will be evaluated to determine whether the creation of a SEG can be supported. The data will be reviewed for consistency of collection methods and laboratory detection limits, and a subset of the samples may be selected based on this review. The data will also be statistically evaluated to determine whether they are sufficient to terminate sampling. IHDataAnalyst V1.27 will be used to assess the data. This program uses Bayesian analysis to assist in making determinations. The 95 percent confidence interval will be calculated and used in making decisions. This evaluation will be used to determine whether a SEG can be created for any of the workers and to determine the need for future sample collection. The data and evaluation presented in this report have been selected and evaluated specifically for the purposes of this project.
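As a rough illustration of the confidence-interval step described above, the Python sketch below computes a t-based 95% confidence interval for the geometric mean of a set of hypothetical exposure results, treating them as lognormally distributed. This is a conventional frequentist calculation, not the Bayesian procedure implemented in IHDataAnalyst, and the sample values are made up.

# Hypothetical exposure results treated as lognormal; 95% CI for the geometric
# mean computed on the log scale with a t critical value.
import math
from statistics import mean, stdev
from scipy import stats

exposures_mg_m3 = [0.12, 0.08, 0.25, 0.05, 0.18, 0.09, 0.30, 0.11]  # hypothetical results

logs = [math.log(x) for x in exposures_mg_m3]
n = len(logs)
se = stdev(logs) / math.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)

lower, upper = mean(logs) - t_crit * se, mean(logs) + t_crit * se
print(f"95% CI for the geometric mean: {math.exp(lower):.3f} to {math.exp(upper):.3f} mg/m^3")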
Abstract:
Effective decision making draws on various databases, including both micro- and macro-level datasets, and in many cases it is a major challenge to ensure consistency between the two levels. Different types of problems can occur, and several methods can be used to solve them. This paper concentrates on the input alignment of household income for microsimulation, which refers to improving the elements of a micro-data survey (EU-SILC) by using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produced model projections up to 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimates of the dynamics of income elements and the changes in income inequalities. Results show that the aligned data yield a different level of income inequality, but do not affect the direction of change from year to year. However, when we analyzed a policy change, the use of aligned data led to larger differences both in income levels and in their dynamics.
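One simple form of input alignment is proportional scaling of a survey income element so that its weighted total matches an administrative macro aggregate. The Python sketch below illustrates that idea with invented figures; it is not a description of the ECONS-TAX model's actual alignment procedure.

# Proportional alignment of a survey income element to a macro total.
# All figures are hypothetical.

def align_to_macro_total(incomes, weights, macro_total):
    """Scale each record's income so the weighted sum equals the macro total."""
    survey_total = sum(w * y for w, y in zip(weights, incomes))
    factor = macro_total / survey_total
    return [y * factor for y in incomes], factor

incomes = [1_200_000, 2_500_000, 900_000]   # hypothetical annual incomes (HUF)
weights = [1500.0, 800.0, 2200.0]           # hypothetical survey weights
aligned, factor = align_to_macro_total(incomes, weights, macro_total=6.0e9)
print(f"alignment factor: {factor:.3f}")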
Abstract:
Currently, the Division of Appeals and Hearings of the South Carolina Department of Health and Human Services has no specific presence on the agency's website to provide information or to allow for the electronic submission of appeals. This project's focus was developing an online presence for the Division on SCDHHS' website. The page will make the Division's procedures publicly available to beneficiaries, providers, and agency program staff who attend hearings. Additionally, parties will have a secure online portal through which they can file appeals and upload supporting documentation, reducing the need to send appeals via first-class mail. The online appeal portal will further the agency's goal of reducing paper.
Abstract:
Collecting ground truth data is an important step to be accomplished before performing a supervised classification. However, its quality depends on human, financial and time resources. It is therefore important to apply a validation process to assess the reliability of the acquired data. In this study, agricultural information was collected in the Brazilian Amazonian state of Mato Grosso in order to map crop expansion based on MODIS EVI temporal profiles. The field work was carried out through interviews for the years 2005-2006 and 2006-2007. This work presents a methodology to validate the quality of the training data and to determine the optimal sample to be used according to the classifier employed. The technique is based on the detection of outlier pixels for each class and is carried out by computing Mahalanobis distances for each pixel: the higher the distance, the further the pixel is from the class centre. Preliminary observations based on the coefficient of variation confirm the ability of the technique to detect outliers. Various subsamples were then defined by applying different thresholds to exclude outlier pixels from the classification process. The classification results demonstrate the robustness of the Maximum Likelihood and Spectral Angle Mapper classifiers; indeed, these classifiers were insensitive to outlier exclusion. By contrast, the decision tree classifier showed better results when 7.5% of the pixels were deleted from the training data. The technique managed to detect outliers for all classes. In this study, few outliers were present in the training data, so the classification quality was not deeply affected by them.
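The outlier screening described above can be sketched in a few lines of Python: compute each training pixel's Mahalanobis distance to its class centre and exclude the farthest fraction of pixels. The data, band count and exclusion fraction below are illustrative (the 7.5% only mirrors the figure reported for the decision tree classifier), not the study's actual values.

# Per-class outlier screening via Mahalanobis distance to the class centre.
import numpy as np

def mahalanobis_distances(pixels: np.ndarray) -> np.ndarray:
    """Mahalanobis distance of each pixel to its class centre.

    pixels: (n_samples, n_features) array, e.g. EVI temporal profiles for one class.
    """
    centre = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diffs = pixels - centre
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))

def outlier_indices(pixels: np.ndarray, exclude_fraction: float) -> np.ndarray:
    """Indices of the pixels farthest from the class centre (candidate outliers)."""
    d = mahalanobis_distances(pixels)
    n_drop = int(round(exclude_fraction * len(d)))
    return np.argsort(d)[::-1][:n_drop]

rng = np.random.default_rng(0)
class_pixels = rng.normal(0.55, 0.05, size=(200, 12))   # hypothetical 12-date EVI profiles
drop = outlier_indices(class_pixels, exclude_fraction=0.075)  # mirrors the study's 7.5%
cleaned = np.delete(class_pixels, drop, axis=0)
print(f"removed {len(drop)} pixels, {len(cleaned)} remain")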
Abstract:
The objectives of this study were to develop a questionnaire that evaluates nursing workers' perception of job factors that may contribute to musculoskeletal symptoms, and to evaluate its psychometric properties. Internationally recommended methodology was followed: construction of domains, items and the instrument as a whole, content validation, and pre-testing. Psychometric properties were evaluated among 370 nursing workers. Construct validity was analyzed by factor analysis, the known-groups technique, and convergent validity. Reliability was assessed through internal consistency and stability. Results indicated satisfactory fit indices in the confirmatory factor analysis, a significant difference (p < 0.01) between the responses of nursing and office workers, and moderate correlations between the new questionnaire and the Numeric Pain Scale, SF-36 and WRFQ. Cronbach's alpha was close to 0.90 and ICC values ranged from 0.64 to 0.76. Therefore, the results indicate that the new questionnaire has good psychometric properties for use in studies involving nursing workers.
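For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed as in the Python sketch below; the simulated 370-by-6 response matrix is invented and only mirrors the sample size reported in the abstract.

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: (n_respondents, n_items) matrix of item responses."""
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(370, 1))                     # simulated common factor
items = latent + rng.normal(scale=0.8, size=(370, 6))  # six correlated Likert-like items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")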
Abstract:
The objective of this study was to assess the construct validity and reliability of the Pediatric Patient Classification Instrument. This correlation study was developed at a teaching hospital, and 227 patients were classified using the instrument. Construct validity was assessed through factor analysis and reliability through internal consistency. The Exploratory Factor Analysis identified three constructs explaining 67.5% of the variance and, in the reliability assessment, the following Cronbach's alpha coefficients were found: 0.92 for the instrument as a whole, 0.88 for the Patient domain, 0.81 for the Family domain, and 0.44 for the Therapeutic procedures domain. The instrument thus evidenced its construct validity and reliability, and these analyses indicate its feasibility. Validation of the Pediatric Patient Classification Instrument still represents a challenge, given its relevance for a closer look at pediatric nursing care and management. Further research should explore its dimensionality and content validity.