891 results for academic programming
Cost savings from relaxation of operational constraints on a power system with high wind penetration
Abstract:
Wind energy is predominantly a nonsynchronous generation source. Large-scale integration of wind generation with existing electricity systems, therefore, presents challenges in maintaining system frequency stability and local voltage stability. Transmission system operators have implemented system operational constraints (SOCs) in order to maintain stability with high wind generation, but imposition of these constraints results in higher operating costs. A mixed integer programming tool was used to simulate generator dispatch in order to assess the impact of various SOCs on generation costs. Interleaved day-ahead scheduling and real-time dispatch models were developed to allow accurate representation of forced outages and wind forecast errors, and were applied to the proposed Irish power system of 2020 with a wind penetration of 32%. Savings of at least 7.8% in generation costs and reductions in wind curtailment of 50% were identified when the most influential SOCs were relaxed. The results also illustrate the need to relax local SOCs together with the system-wide nonsynchronous penetration limit SOC, as savings from increasing the nonsynchronous limit beyond 70% were restricted without relaxation of local SOCs. The methodology and results enable quantification of the costs of SOCs, allowing the optimal upgrade path for generation and transmission infrastructure to be determined.
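Since the paper's core tool is a mixed integer program for dispatch under SOCs, a minimal sketch may help make the setup concrete. The sketch below, in Python with PuLP, imposes a system-wide nonsynchronous penetration cap on a toy three-generator dispatch; the generator data, costs, and the 70% figure are invented for illustration, not the authors' model.

```python
# Minimal unit-commitment sketch: least-cost dispatch subject to a
# system non-synchronous penetration (SNSP) cap, showing how one SOC
# enters the MIP. Generator data and the 70% cap are invented for
# illustration; this is not the paper's model.
import pulp

gens = {  # name: (marginal cost EUR/MWh, capacity MW, synchronous?)
    "coal": (40.0, 400, True),
    "ccgt": (60.0, 300, True),
    "wind": (0.0, 500, False),
}
demand_mw = 700
snsp_limit = 0.70  # system-wide non-synchronous penetration SOC

prob = pulp.LpProblem("dispatch", pulp.LpMinimize)
p = {g: pulp.LpVariable(f"p_{g}", 0, cap) for g, (_, cap, _) in gens.items()}
u = {g: pulp.LpVariable(f"u_{g}", cat="Binary") for g in gens}  # commitment

prob += pulp.lpSum(cost * p[g] for g, (cost, _, _) in gens.items())  # cost
prob += pulp.lpSum(p.values()) == demand_mw  # energy balance
for g, (_, cap, _) in gens.items():
    prob += p[g] <= cap * u[g]  # output only when committed
# The SOC: non-synchronous output capped at snsp_limit of demand
prob += (pulp.lpSum(p[g] for g, (_, _, sync) in gens.items() if not sync)
         <= snsp_limit * demand_mw)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for g in gens:
    print(g, pulp.value(p[g]))  # wind is curtailed at the SNSP cap
```

Relaxing `snsp_limit` in a sketch like this directly lowers the objective value, which is the mechanism behind the cost savings the paper quantifies.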
Abstract:
A report from the inaugural CONUL (Consortium of National & University Libraries) conference held in the Radisson Blu Hotel, Athlone, June 3rd & 4th 2015.
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
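As a flavour of the data-parallel pattern the article teaches, the sketch below evaluates all component densities of a one-dimensional Gaussian mixture in a single broadcast on the GPU using CuPy, a NumPy-compatible GPU array library. This illustrates the general approach only; it is not the authors' CUDA code.

```python
# Data-parallel E-step for a 1-D Gaussian mixture on the GPU.
# Illustrative sketch of the pattern, not the paper's implementation.
import math
import cupy as cp  # NumPy-compatible GPU arrays

def e_step(x, weights, means, stds):
    """Posterior responsibilities r[i, j] of component j for point i.
    x: (n,) observations; weights/means/stds: (k,) mixture parameters.
    The (n, k) grid of log-densities is computed by broadcasted GPU
    kernels rather than a Python loop over components."""
    z = (x[:, None] - means) / stds                # (n, k) standardized
    log_dens = (-0.5 * z**2 - cp.log(stds)
                - 0.5 * math.log(2 * math.pi) + cp.log(weights))
    m = log_dens.max(axis=1, keepdims=True)        # stable log-sum-exp
    log_norm = m + cp.log(cp.exp(log_dens - m).sum(axis=1, keepdims=True))
    return cp.exp(log_dens - log_norm)             # rows sum to 1

x = cp.random.standard_normal(1_000_000)
r = e_step(x, cp.asarray([0.5, 0.5]), cp.asarray([-1.0, 1.0]),
           cp.asarray([1.0, 1.0]))
print(r.shape)  # (1000000, 2)
```

Because every observation-component pair is independent, this step maps naturally onto thousands of GPU threads, which is where the speed-ups the article reports come from.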
Abstract:
Gemstone Team HOPE (Hospital Optimal Productivity Enterprise)
Abstract:
Gemstone Team Peace in Prisons
Abstract:
Programmed death is often associated with a bacterial stress response. This behavior appears paradoxical, as it offers no benefit to the individual. This paradox can be explained if the death is 'altruistic': the killing of some cells can benefit the survivors through release of 'public goods'. However, the conditions under which bacterial programmed death becomes advantageous have not been unambiguously demonstrated experimentally. Here, we determined such conditions by engineering tunable, stress-induced altruistic death in the bacterium Escherichia coli. Using a mathematical model, we predicted the existence of an optimal programmed death rate that maximizes population growth under stress. We further predicted that altruistic death could generate the 'Eagle effect', a counter-intuitive phenomenon where bacteria appear to grow better when treated with higher antibiotic concentrations. In support of these modeling insights, we experimentally demonstrated both the optimality in programmed death rate and the Eagle effect using our engineered system. Our findings fill a critical conceptual gap in the analysis of the evolution of bacterial programmed death, and have implications for the design of antibiotic treatments.
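A toy version of the modeling idea can be written down directly: cells die at a programmed rate d and release a public good that relieves stress for the survivors, so an intermediate d can maximize final population size. The ODE sketch below uses invented parameters and functional forms purely for illustration; it is not the paper's model.

```python
# Toy model of stress-induced altruistic death: a fraction d of cells
# lyse and release a public good that relieves stress for survivors.
# Parameters and functional forms are invented, not the paper's.
import numpy as np
from scipy.integrate import odeint

def rhs(y, t, d):
    n, g = y                         # population and public-good level
    growth = 1.2 * g / (0.5 + g)     # the good permits growth under stress
    death = 0.8 / (1.0 + g)          # stress kills cells; the good relieves it
    dn = (growth - death - d) * n    # d = programmed (altruistic) death rate
    dg = d * n - 0.3 * g             # lysed cells release the good; it decays
    return [dn, dg]

t = np.linspace(0.0, 10.0, 200)
for d in (0.0, 0.05, 0.2, 0.8):
    n_final = odeint(rhs, [10.0, 0.0], t, args=(d,))[-1, 0]
    print(f"d = {d:.2f}  ->  final population {n_final:.1f}")
```

Under these toy parameters, d = 0 leaves the population starved of the public good while a very large d kills too many producers, so an intermediate death rate can come out best, mirroring the predicted optimum.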
Abstract:
© 2013 American Psychological Association. This meta-analysis synthesizes research on the effectiveness of intelligent tutoring systems (ITS) for college students. Thirty-five reports were found containing 39 studies assessing the effectiveness of 22 types of ITS in higher education settings. Most frequently studied were AutoTutor, Assessment and Learning in Knowledge Spaces, eXtended Tutor-Expert System, and Web Interface for Statistics Education. Major findings include: (a) overall, ITS had a moderate positive effect on college students' academic learning (g = .32 to g = .37); (b) ITS were less effective than human tutoring, but they outperformed all other instruction methods and learning activities, including traditional classroom instruction, reading printed text or computerized materials, computer-assisted instruction, laboratory or homework assignments, and no-treatment control; (c) ITS effectiveness did not significantly differ by type of ITS, subject domain, or the manner or degree of their involvement in instruction and learning; and (d) effectiveness in earlier studies appeared to be significantly greater than that in more recent studies. In addition, there is some evidence suggesting the importance of teachers and pedagogy in ITS-assisted learning.
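For readers unfamiliar with the effect-size metric, the g values reported are Hedges' g, a standardized mean difference with a small-sample correction. A minimal computation sketch follows; this is the generic formula, not the review's meta-analytic code, and the example numbers are invented.

```python
# Hedges' g: standardized mean difference with small-sample correction.
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    # Pooled standard deviation across treatment and control groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction factor
    return j * d

# e.g. an ITS class averaging 78 (SD 10, n=30) vs. control 74 (SD 11, n=30)
print(round(hedges_g(78, 74, 10, 11, 30, 30), 2))  # ~0.38, a moderate effect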
Abstract:
Activation of CD4+ T cells results in rapid proliferation and differentiation into effector and regulatory subsets. CD4+ effector T cell (Teff) (Th1 and Th17) and Treg subsets are metabolically distinct, yet the specific metabolic differences that modify T cell populations are uncertain. Here, we evaluated CD4+ T cell populations in murine models and determined that inflammatory Teffs maintain high expression of glycolytic genes and rely on high glycolytic rates, while Tregs are oxidative and require mitochondrial electron transport to proliferate, differentiate, and survive. Metabolic profiling revealed that pyruvate dehydrogenase (PDH) is a key bifurcation point between T cell glycolytic and oxidative metabolism. PDH function is inhibited by PDH kinases (PDHKs). PDHK1 was expressed in Th17 cells, but not Th1 cells, and at low levels in Tregs, and inhibition or knockdown of PDHK1 selectively suppressed Th17 cells and increased Tregs. This alteration in the CD4+ T cell populations was mediated in part through ROS, as N-acetyl cysteine (NAC) treatment restored Th17 cell generation. Moreover, inhibition of PDHK1 modulated immunity and protected animals against experimental autoimmune encephalomyelitis, decreasing Th17 cells and increasing Tregs. Together, these data show that CD4+ subsets utilize and require distinct metabolic programs that can be targeted to control specific T cell populations in autoimmune and inflammatory diseases.
Abstract:
BACKGROUND: In academia, the scholarship of research may include, but is not limited to, peer-reviewed publications, presentations, and grant submissions. Programmatic research productivity is one of many measures of academic program reputation and ranking. Another measure of learning success among physical therapist education programs in the USA is a 100% three-year pass rate of graduates on the standardized National Physical Therapy Examination (NPTE). In this study, we endeavored to determine whether there was an association between research productivity, measured through artifacts, and 100% three-year pass rates on the NPTE. METHODS: This observational study used pre-approved database exploration covering all accredited programs in the USA that graduated physical therapists during 2009, 2010, and 2011. Descriptive variables captured included raw research productivity artifacts such as peer-reviewed publications and books, number of professional presentations, number of scholarly submissions, total grant dollars, and number of grants submitted. Descriptive statistics and comparisons (using chi-square and t-tests) among program characteristics and research artifacts were calculated. Univariate logistic regression analyses, with appropriate control variables, were used to determine associations between research artifacts and 100% pass rates. RESULTS: Number of scholarly artifacts submitted, faculty with grants, and grant proposals submitted were significantly higher in programs with 100% three-year pass rates. However, after controlling for program characteristics such as grade point average, diversity percentage of the cohort, public/private institution, and number of faculty, there were no significant associations between scholarly artifacts and 100% three-year pass rates. CONCLUSIONS: Factors other than research artifacts are likely better predictors of passing the NPTE.
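The analysis described is a logistic regression of pass status on a research artifact with program-level controls. A minimal sketch of that kind of model, in Python with statsmodels, follows; all variable names and the synthetic data are invented for illustration and are not the study's dataset.

```python
# Sketch of a logistic regression of 100% three-year NPTE pass status
# on one research artifact, with program-level controls. Synthetic
# data; column names invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                      # hypothetical programs
df = pd.DataFrame({
    "grants_submitted": rng.poisson(3, n),   # artifact of interest
    "cohort_gpa": rng.normal(3.5, 0.2, n),   # controls named in the abstract
    "pct_diverse": rng.uniform(5, 60, n),
    "is_public": rng.integers(0, 2, n),
    "n_faculty": rng.integers(6, 25, n),
})
df["pass_100"] = rng.integers(0, 2, n)       # 1 = 100% three-year pass rate

X = sm.add_constant(df.drop(columns="pass_100"))
fit = sm.Logit(df["pass_100"], X).fit(disp=False)
print(fit.params["grants_submitted"])        # log-odds per extra submission
```

The study's null result corresponds to the artifact coefficient losing significance once the control columns enter the model.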
Abstract:
An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, on November 23, 2015.
Abstract:
We have described the changes and innovations in research on Mathematics Education that have taken place in Spain during the last 25 years, highlighting especially the rapid development of the last 10 years. None of these great and striking changes would have occurred had there not been an evolution within Spanish society and, in particular, within its educational system. Thanks to this, the appropriate conditions for research development have been found.
Abstract:
Of key importance to oil and gas companies is the size distribution of fields in the areas that they are drilling. Recent arguments suggest that there are many more fields yet to be discovered in mature provinces than had previously been thought because the underlying distribution is monotonic not peaked. According to this view the peaked nature of the distribution for discovered fields reflects not the underlying distribution but the effect of economic truncation. This paper contributes to the discussion by analysing up-to-date exploration and discovery data for two mature provinces using the discovery-process model, based on sampling without replacement and implicitly including economic truncation effects. The maximum likelihood estimation involved generates a high-dimensional mixed-integer nonlinear optimization problem. A highly efficient solution strategy is tested, exploiting the separable structure and handling the integer constraints by treating the problem as a masked allocation problem in dynamic programming.
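The discovery-process model at the heart of this analysis treats exploration as size-biased sampling without replacement: the probability that a field is found next is proportional to its size raised to a power β. A bare-bones log-likelihood sketch follows, on toy data; the paper's actual mixed-integer, dynamic-programming estimation is far more involved.

```python
# Log-likelihood of a discovery sequence under the discovery-process
# model: fields are sampled without replacement with probability
# proportional to size**beta. Toy sketch, not the paper's estimator.
import numpy as np

def discovery_loglik(sizes_in_order, undiscovered_sizes, beta):
    """sizes_in_order: discovered field sizes, in discovery order.
    undiscovered_sizes: assumed sizes of fields not yet found."""
    pool = np.concatenate([sizes_in_order, undiscovered_sizes])
    w = pool ** beta                    # size-biased sampling weights
    ll = 0.0
    for i, s in enumerate(sizes_in_order):
        # P(next find is s) = w[i] / sum of weights still in the ground
        ll += beta * np.log(s) - np.log(w[i:].sum())
    return ll

# Toy data: three discoveries, two fields assumed to remain undiscovered
print(discovery_loglik(np.array([120.0, 80.0, 30.0]),
                       np.array([20.0, 10.0]), beta=1.0))
```

Maximizing this over β and over the integer configuration of undiscovered fields is what makes the full problem a high-dimensional mixed-integer nonlinear optimization.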
Abstract:
Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages represented by the Portland Group's HPF, (semi-)automated serial-to-parallel source-to-source translation represented by CAPTools from the University of Greenwich, and parallel libraries represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under any paradigm includes specification of the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system, and guiding the source-to-source code transformation by responding to various queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the CRAY T3E.
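The "affine global-to-local index mappings" asked of the PETSc programmer can be illustrated in a few lines. The sketch below shows 1-D block partitioning with ghost points in Python; it is a generic illustration of the idea, not PETSc's API (PETSc supplies this machinery itself).

```python
# Sketch of the affine global-to-local index mapping used in SPMD
# domain decomposition: a 1-D grid of n points split into contiguous
# blocks, one per process, with one ghost point on each side.
# Generic illustration only; not PETSc code.

def block_range(n, nprocs, rank):
    """Global half-open range [start, end) owned by `rank` under
    near-even blocking (first `extra` ranks get one extra point)."""
    base, extra = divmod(n, nprocs)
    start = rank * base + min(rank, extra)
    return start, start + base + (1 if rank < extra else 0)

def global_to_local(g, start, ghosts=1):
    """Affine map: local index of global point g on the owning process,
    offset so that local index 0 is the left ghost point."""
    return g - start + ghosts

n, nprocs = 1000, 4
for rank in range(nprocs):
    s, e = block_range(n, nprocs, rank)
    print(f"rank {rank}: owns globals [{s}, {e}), "
          f"global {s} -> local {global_to_local(s, s)}")
```

The same affine offset idea is what the HPF version expresses through subdomain blocking, which is why the paper describes the two tasks as similar.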