79 results for multi-factor models
Abstract:
The air-sea exchange of two legacy persistent organic pollutants (POPs), γ-HCH and PCB 153, in the North Sea is presented and discussed using results of regional fate-and-transport and shelf-sea hydrodynamic ocean models for the period 1996–2005. Air-sea exchange occurs through gas exchange (deposition and volatilization), wet deposition, and dry deposition. Atmospheric concentrations are interpolated into the model domain from results of the EMEP MSC-East multi-compartmental model (Gusev et al., 2009). The North Sea is net depositional for γ-HCH, dominated by gas deposition with notable seasonal variability and a downward trend over the 10-year period. Volatilization rates of γ-HCH are generally a factor of 2–3 less than gas deposition in winter, spring, and summer but greater in autumn, when the North Sea is net volatilizational. A downward trend in fugacity ratios is found, since gas deposition is decreasing faster than volatilization. The North Sea is net volatilizational for PCB 153, with the highest ratios of volatilization to deposition found in areas surrounding polluted British and continental river sources. Large quantities of PCB 153 entering through rivers lead to very high local rates of volatilization.
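For reference, the direction of net gas exchange in such models is commonly diagnosed with a water/air fugacity ratio. A minimal sketch of the standard two-film formulation follows; the symbols are generic illustrations, not taken from this study:

```latex
% Water/air fugacity ratio for a semivolatile compound.
% C_w, C_a : dissolved and gaseous concentrations (mol m^-3)
% H        : temperature-corrected Henry's law constant (Pa m^3 mol^-1)
% R, T     : gas constant and water temperature
\[
  \frac{f_w}{f_a} = \frac{C_w\,H}{C_a\,R\,T}
\]
% f_w/f_a > 1 implies net volatilization; f_w/f_a < 1 implies net gas deposition.
```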
Abstract:
Models of neutrino-driven core-collapse supernova explosions have matured considerably in recent years. Explosions of low-mass progenitors can routinely be simulated in 1D, 2D, and 3D. Nucleosynthesis calculations indicate that these supernovae could be contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars remains under investigation, although the first 3D models of neutrino-driven explosions employing multi-group neutrino transport have become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added or improved before simulations can robustly explain supernova explosions over a wide range of progenitors. Solutions recently suggested in the literature include uncertainties in the neutrino rates, rotation, and seed perturbations from convective shell burning. We review the implications of 3D simulations of shell burning in supernova progenitors for the 'perturbations-aided neutrino-driven mechanism', whose efficacy is illustrated by the first successful multi-group neutrino hydrodynamics simulation of an 18 solar-mass progenitor with 3D initial conditions. We conclude with speculations about the impact of 3D effects on the structure of massive stars through convective boundary mixing.
Abstract:
The purpose of this study was to determine whether the prevalence and severity of gingival overgrowth in renal transplant recipients concomitantly treated with cyclosporin and a calcium channel blocker were associated with functional polymorphisms within the signal sequence of the transforming growth factor-beta1 (TGF-β1) gene.
Abstract:
The first evidence of x-ray harmonic radiation extending to 3.3 Å, 3.8 keV (order n > 3200) from petawatt-class laser-solid interactions is presented, exhibiting relativistic-limit efficiency scaling (η ∼ n^(-2.5)–n^(-3)) at multi-keV energies. This scaling holds up to a maximum order, n_RO ∼ 8^(1/2)γ^3, where γ is the relativistic Lorentz factor, above which the first evidence of an intensity-dependent efficiency rollover is observed. The coherent nature of the generated harmonics is demonstrated by the highly directional beamed emission, which for photon energy hν > 1 keV is found to be into a cone angle of ∼4°, significantly less than that of the incident laser cone (20°).
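Written out, the reported scalings take the form below; the numerical example uses an assumed Lorentz factor, not a value quoted in the abstract:

```latex
% Harmonic efficiency scaling and rollover order in the relativistic limit.
\[
  \eta \sim n^{-p}, \quad p \approx 2.5\text{--}3,
  \qquad
  n_{\mathrm{RO}} \sim 8^{1/2}\,\gamma^{3}
\]
% Example: gamma = 10.5 gives n_RO ~ 2.83 x 1158 ~ 3.3 x 10^3,
% consistent with the observed harmonic orders n > 3200.
```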
Abstract:
This study aimed to examine the structure of the Statistics Anxiety Rating Scale. Responses from 650 undergraduate psychology students throughout the UK were collected through an online study. Based on previous research, three different models were specified and estimated using confirmatory factor analysis. Fit indices were used to determine whether each model fitted the data, and a likelihood ratio difference test was used to identify the best-fitting model. The original six-factor model was the best explanation of the data. All six subscales were intercorrelated and internally consistent. It was concluded that the Statistics Anxiety Rating Scale measures the six subscales it was designed to assess in a UK population.
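As an aside, the likelihood ratio difference test used here to compare nested factor models is straightforward to compute. A minimal sketch in Python, with placeholder fit statistics rather than the study's actual values:

```python
# Likelihood-ratio difference test between two nested CFA models.
from scipy.stats import chi2

def lr_difference_test(chisq_restricted, df_restricted,
                       chisq_general, df_general):
    """Test whether the more general model fits significantly better
    than the nested, more restricted model."""
    delta_chisq = chisq_restricted - chisq_general
    delta_df = df_restricted - df_general
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail chi-square
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: one-factor vs. six-factor model.
dchi, ddf, p = lr_difference_test(2150.0, 275, 980.0, 260)
print(f"delta chi2 = {dchi:.1f} on {ddf} df, p = {p:.3g}")
```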
Abstract:
The Strengths and Difficulties Questionnaire (SDQ) is a widely used 25-item screening test for emotional and behavioral problems in children and adolescents. This study attempted to critically examine the factor structure of the adolescent self-report version. As part of an ongoing longitudinal cohort study, a total of 3,753 pupils completed the SDQ when aged 12. Both three- and five-factor exploratory factor analysis models were estimated. A number of deviations from the hypothesized SDQ structure were observed, including a lack of unidimensionality within particular subscales, cross-loadings, and items failing to load on any factor. Model fit of the confirmatory factor analysis model was modest, providing limited support for the hypothesized five-component structure. The analyses suggested a number of weaknesses within the component structure of the self-report SDQ, particularly in relation to the reverse-coded items.
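For readers who want to probe cross-loadings of the kind reported here, an exploratory factor analysis along these lines can be sketched with the factor_analyzer package; the input file and column layout are assumptions, not study materials:

```python
# Exploratory factor analysis sketch for 25 SDQ item responses.
import pandas as pd
from factor_analyzer import FactorAnalyzer

sdq_items = pd.read_csv("sdq_responses.csv")  # hypothetical 25-column file

# Five-factor solution with an oblique rotation, since SDQ subscales
# are expected to correlate.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin")
fa.fit(sdq_items)

loadings = pd.DataFrame(fa.loadings_, index=sdq_items.columns)
# Items loading > .3 on more than one factor flag potential cross-loadings.
cross_loading = (loadings.abs() > 0.3).sum(axis=1) > 1
print(loadings.round(2))
print("Cross-loading items:", list(loadings.index[cross_loading]))
```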
Abstract:
Adaptability to changing circumstances is a key feature of living creatures. Understanding such adaptive processes is central to developing successful autonomous artifacts. In this paper two perspectives are brought to bear on the issue of adaptability. The first is a short-term perspective which looks at adaptability in terms of the interactions between the agent and the environment. The second perspective involves a hierarchical evolutionary model which seeks to identify higher-order forms of adaptability based on the concept of adaptive meta-constructs. Task-orientated and agent-centered models of adaptive processes in artifacts are considered from these two perspectives. The former is represented by the fitness function approach found in evolutionary learning, and the latter by the concepts of empowerment and homeokinesis found in models derived from the self-organizing systems approach. A meta-construct approach to adaptability based on the identification of higher-level meta-metrics is also outlined. © 2009 Published by Elsevier B.V.
Abstract:
Background/Aims: Hepatocellular carcinoma is a leading cause of global cancer mortality, with standard chemotherapy being only minimally effective in prolonging survival. We investigated whether combined targeting of vascular endothelial growth factor protein and expression might affect hepatocellular carcinoma growth and angiogenesis.
Abstract:
Microsatellite genotyping is a common DNA characterization technique in population, ecological, and evolutionary genetics research. Since different alleles are sized relative to internal size-standards, different laboratories must calibrate and standardize allelic designations when exchanging data. This interchange of microsatellite data can often prove problematic. Here, 16 microsatellite loci were calibrated and standardized for the Atlantic salmon, Salmo salar, across 12 laboratories. Although inconsistencies were observed, particularly due to differences between the migration of DNA fragments and actual allelic size ('size shifts'), inter-laboratory calibration was successful. Standardization also allowed an assessment of the degree and partitioning of genotyping error. Notably, the global allelic error rate was reduced from 0.05 ± 0.01 prior to calibration to 0.01 ± 0.002 post-calibration. Most errors were found to occur during analysis (i.e. when size-calling alleles; the mean proportion of all errors that were analytical errors across loci was 0.58 after calibration). No evidence was found of an association between the degree of error and the allelic size range of a locus, number of alleles, or repeat type, nor was there evidence that genotyping errors were more prevalent when a laboratory analyzed samples from outside its usual geographic area. The microsatellite calibration between laboratories presented here will be especially important for genetic assignment of marine-caught Atlantic salmon, enabling analysis of marine mortality, a major factor in the observed declines of this highly valued species.
Abstract:
PURPOSE The appropriate selection of patients for early clinical trials presents a major challenge. Previous analyses focusing on this problem were limited by small size and by interpractice heterogeneity. This study aims to define prognostic factors to guide risk-benefit assessments by using a large patient database from multiple phase I trials. PATIENTS AND METHODS Data were collected from 2,182 eligible patients treated in phase I trials between 2005 and 2007 in 14 European institutions. We derived and validated independent prognostic factors for 90-day mortality by using multivariate logistic regression analysis. RESULTS The 90-day mortality was 16.5%, with a drug-related death rate of 0.4%. Trial discontinuation within 3 weeks occurred in 14% of patients, primarily because of disease progression. Eight different prognostic variables for 90-day mortality were validated: performance status (PS), albumin, lactate dehydrogenase, alkaline phosphatase, number of metastatic sites, clinical tumor growth rate, lymphocytes, and WBC. Two different models of prognostic scores for 90-day mortality were generated by using these factors, one including and one excluding PS; both achieved specificities of more than 85% and sensitivities of approximately 50% when using a score cutoff of 5 or higher. These models were not superior to the previously published Royal Marsden Hospital score in their ability to predict 90-day mortality. CONCLUSION Patient selection using any of these prognostic scores would reduce non-drug-related 90-day mortality among patients enrolled in phase I trials by 50%. However, this could be achieved only through an overall 20% reduction in recruitment to phase I studies, and more than half of the patients thus excluded would in fact have survived beyond 90 days.
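To illustrate how an additive prognostic score with a cutoff of 5 or higher translates into sensitivity and specificity, here is a hedged sketch; the point scheme and thresholds are illustrative assumptions, not the published models:

```python
# Illustrative additive risk score and its operating characteristics.
import numpy as np

def risk_score(albumin_g_l, ldh_ratio, alp_ratio, n_met_sites,
               lymphocytes, wbc, fast_tumor_growth, ps):
    """One point per adverse factor (hypothetical scheme)."""
    points = [
        albumin_g_l < 35,      # low albumin
        ldh_ratio > 1.0,       # LDH above upper limit of normal
        alp_ratio > 1.0,       # ALP above upper limit of normal
        n_met_sites > 2,
        lymphocytes < 1.0,     # 10^9/L
        wbc > 10.0,            # 10^9/L
        bool(fast_tumor_growth),
        ps >= 2,               # performance status
    ]
    return sum(points)

def sensitivity_specificity(scores, died_90d, cutoff=5):
    scores = np.asarray(scores)
    died = np.asarray(died_90d, dtype=bool)
    high_risk = scores >= cutoff
    sensitivity = (high_risk & died).sum() / died.sum()
    specificity = (~high_risk & ~died).sum() / (~died).sum()
    return sensitivity, specificity
```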
Abstract:
The aim of this study was to investigate the subjective experience of acquired deafness using quantitative (questionnaire) and qualitative (interview) methods. This paper presents findings from the questionnaire data. Eighty-seven people (of whom 38 had acquired a profound loss) participated in the study. The questionnaire contained items designed to examine both audiological and non-audiological aspects of deafened people's experiences. It also sought to measure the extent to which those aspects affect their quality of life. The questionnaire included three variables (i.e. reported frequency and impact of depression, and overall effect of deafness on one's life) as broad indicators of adjustment. Seventy-three respondents (including all but one of the profound group) completed the questionnaire. Factor analysis of the questionnaire data identified six major themes (each explaining more than 10% of the variance) underlying the personal experience of acquired deafness. Three themes (communicative deprivation, restriction, and malinteraction by hearing people) dealt with observable aspects of respondents' experience. Multiple regression found that these factor themes were associated with biomedical variables. The remaining three themes dealt with less tangible aspects of the deafness experience. These themes (feelings of distress in interaction, feelings of abandonment, and benefit from positive experiences) did not associate with biomedical variables. Finally, multiple regression indicated that respondents' factor scores predict the impact of deafness at least as strongly as their audiological and social characteristics.
Abstract:
Loss of biodiversity and nutrient enrichment are two of the main human impacts on ecosystems globally, yet we understand very little about the interactive effects of multiple stressors on natural communities and how this relates to biodiversity and ecosystem functioning. Advancing our understanding requires the following: (1) incorporation of processes occurring within and among trophic levels in natural ecosystems and (2) tests of context-dependency of species loss effects. We examined the effects of loss of a key predator and two groups of its prey on algal assemblages at both ambient and enriched nutrient conditions in a marine benthic system and tested for interactions between the loss of functional diversity and nutrient enrichment on ecosystem functioning. We found that enrichment interacted with food web structure to alter the effects of species loss in natural communities. At ambient conditions, the loss of primary consumers led to an increase in biomass of algae, whereas predator loss caused a reduction in algal biomass (i.e. a trophic cascade). However, contrary to expectations, we found that nutrient enrichment negated the cascading effect of predators on algae. Moreover, algal assemblage structure varied in distinct ways in response to mussel loss, grazer loss, predator loss and with nutrient enrichment, with compensatory shifts in algal abundance driven by variation in responses of different algal species to different environmental conditions and the presence of different consumers. We identified and characterized several context-dependent mechanisms driving direct and indirect effects of consumers. Our findings highlight the need to consider environmental context when examining potential species redundancies in particular with regard to changing environmental conditions. Furthermore, non-trophic interactions based on empirical evidence must be incorporated into food web-based ecological models to improve understanding of community responses to global change.
Abstract:
Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change, and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high-performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that match the application's requirements.
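The two-phase approach can be made concrete with a short sketch: phase one filters candidates against hard constraints, phase two ranks the survivors with a pluggable heuristic. The Resource fields and heuristics below are illustrative assumptions, not the paper's actual abstraction layer:

```python
# Two-phase, constraints-based resource selection sketch.
from dataclasses import dataclass

@dataclass
class Resource:
    provider: str
    cores: int
    memory_gb: int
    hourly_cost: float
    benchmark_score: float  # higher is faster

def phase_one(resources, min_cores, min_memory_gb):
    """Phase 1: keep only resources meeting the hard constraints."""
    return [r for r in resources
            if r.cores >= min_cores and r.memory_gb >= min_memory_gb]

def phase_two(candidates, heuristic):
    """Phase 2: pick the best candidate under the chosen heuristic."""
    return min(candidates, key=heuristic) if candidates else None

cost_based = lambda r: r.hourly_cost              # financial services app
performance_based = lambda r: -r.benchmark_score  # HPC app

pool = [Resource("ProviderA", 8, 32, 0.40, 95.0),
        Resource("ProviderB", 16, 64, 0.90, 180.0)]
print(phase_two(phase_one(pool, 8, 32), cost_based))
```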
Abstract:
A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative to estimate WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by a factor of about eight when the forest is 'poolable'. © 2008 Elsevier Inc. All rights reserved.
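One common way to realize BMA in practice is to approximate posterior model probabilities with BIC weights; a minimal sketch under that assumption (the BIC and WTP values are placeholders, and the authors' own algorithm may differ):

```python
# BIC-weighted Bayesian model averaging sketch.
import numpy as np

def bma_weights(bics):
    """Posterior model probabilities from BICs under equal priors,
    via exp(-BIC/2) normalized across models."""
    bics = np.asarray(bics, dtype=float)
    rel = np.exp(-0.5 * (bics - bics.min()))  # shift by min for stability
    return rel / rel.sum()

def bma_estimate(wtp_by_model, bics):
    """Weight each model's WTP estimate by its posterior probability."""
    return float(np.dot(bma_weights(bics), wtp_by_model))

# Hypothetical: three candidate pooling structures for the forest data.
print(bma_estimate([12.4, 10.9, 11.7], [1052.3, 1049.8, 1050.6]))
```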
Abstract:
Alzheimer's disease (AD) and age-related macular degeneration (AMD) are both neurodegenerative disorders which share common pathological and biochemical features of the complement pathway. The aim of this study was to investigate whether there is an association between well-replicated AMD genetic risk factors and AD. A large cohort of AD patients (n = 3898) and controls was genotyped for single nucleotide polymorphisms (SNPs) in the complement factor H (CFH), age-related maculopathy susceptibility protein 2 (ARMS2), complement component 2 (C2), complement factor B (CFB), and complement component 3 (C3) genes. While significant but modest associations were identified between the complement factor H, age-related maculopathy susceptibility protein 2, and complement component 3 single nucleotide polymorphisms and AD, these were different in direction or genetic model to those observed in AMD. In addition, the multilocus genetic model that predicts around half of the sibling risk for AMD does not predict risk for AD. Our study provides further support to the hypothesis that while activation of the alternative complement pathway is central to AMD pathogenesis, it is less involved in AD.