920 results for Youth programming
Exploring processes of indeterminate determinism in music composition, programming and improvisation
Abstract:
This portfolio consists of 15 original musical works. Taking the form of electronic and acousmatic music, multimedia, and scores, these chamber works are the result of experimentation and improvisation with individually built computer interfaces. The accompanying commentary discusses the conceptual practice of these interfaces becoming a compositional entity that presents a multi-interpretative opportunity to explore, engage, and personalise. The commentary then examines the path of creative decisions and musical choices that shaped both the interfaces and the resulting musical and visual works. The portfolio is accompanied by the interfaces used, transcoded interface-behaviour data, and documented improvisational findings.
Abstract:
This work considers the static calculation of a program's average-case running time. The number of systems that currently tackle this research problem is quite small, owing to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research. That system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in the language. Furthermore, the theory behind MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here: for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses carried out during this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic in evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
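Although MOQA's own constructs are not reproduced here, the quantity such a system computes can be illustrated with a short sketch: an exact average of a cost measure over the uniform distribution of labelings. The plain-Python example below (the helper names and the choice of insertion sort are illustrative assumptions, not MOQA code) averages an exact comparison count over every labeling of a small input:

    from itertools import permutations

    def comparisons_insertion_sort(a):
        # Count the comparisons insertion sort makes on the input list.
        a = list(a)
        count = 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                count += 1                      # one key comparison
                if a[j - 1] > a[j]:
                    a[j - 1], a[j] = a[j], a[j - 1]
                    j -= 1
                else:
                    break
        return count

    def average_comparisons(n):
        # Exact average over all n! labelings, i.e. a uniform distribution.
        perms = list(permutations(range(n)))
        return sum(comparisons_insertion_sort(p) for p in perms) / len(perms)

    print(average_comparisons(4))   # exact average-case comparison count, n = 4

MOQA's contribution is to derive such averages statically and compositionally from the program text, rather than by the brute-force enumeration used in this sketch.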
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software designs can lead to vast speed-ups and, critically, enable statistical analyses that are presently not performed because of compute-time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context.
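As a hedged illustration of why mixture models map well to GPUs, consider the E-step of a Gaussian mixture: every observation's component densities are independent of every other observation's, so each entry of the n × K matrix below could be handed to its own GPU thread. The sketch is a plain-NumPy CPU analogue of that structure, not the article's code:

    import numpy as np

    def responsibilities(x, weights, means, variances):
        # Posterior component probabilities for a 1-D Gaussian mixture.
        # The (n, K) log-density computation is embarrassingly parallel,
        # which is the structure GPU implementations exploit.
        x = x[:, None]                                     # shape (n, 1)
        log_dens = (np.log(weights)
                    - 0.5 * np.log(2 * np.pi * variances)
                    - 0.5 * (x - means) ** 2 / variances)  # shape (n, K)
        log_dens -= log_dens.max(axis=1, keepdims=True)    # stabilise exp
        dens = np.exp(log_dens)
        return dens / dens.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    r = responsibilities(x, weights=np.array([0.5, 0.5]),
                         means=np.array([-1.0, 1.0]),
                         variances=np.array([1.0, 1.0]))
    print(r.shape)                                         # (10000, 2)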
Abstract:
The health of clergy is important, and clergy may find health programming tailored to them more effective. Little is known about existing clergy health programs. We contacted Protestant denominational headquarters and searched academic databases and the Internet. We identified 56 clergy health programs and categorized them into prevention and personal enrichment; counseling; marriage and family enrichment; peer support; congregational health; congregational effectiveness; denominational enrichment; insurance/strategic pension plans; and referral-based programs. Only 13 of the programs engaged in outcomes evaluation. Using the Socioecological Framework, we found that many programs support individual-level and institutional-level changes, but few programs support congregational-level changes. Outcome evaluation strategies and a central repository for information on clergy health programs are needed.
Abstract:
Considerable scientific and intervention attention has been paid to the judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decisions to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths, with mathematical representations that may be used to quantify response strength. The model's components are a heuristic for describing decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of the social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions.
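The abstract does not reproduce the model's equations, but response strength in response-evaluation frameworks of this kind is often formalised as an expected value over anticipated outcomes; the following form is an illustrative assumption, not the authors' exact representation:

    RED(r) = Σ_o E(o | r) · V(o)

where E(o | r) is the perceived likelihood that response r produces outcome o and V(o) is the value placed on that outcome. The response with the greatest strength is selected, unless impulsive responding bypasses the evaluation altogether.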
Abstract:
Gemstone Team Peace in Prisons
Abstract:
Programmed death is often associated with a bacterial stress response. This behavior appears paradoxical, as it offers no benefit to the individual. The paradox can be explained if the death is 'altruistic': the killing of some cells can benefit the survivors through release of 'public goods'. However, the conditions under which bacterial programmed death becomes advantageous have not been unambiguously demonstrated experimentally. Here, we determined such conditions by engineering tunable, stress-induced altruistic death in the bacterium Escherichia coli. Using a mathematical model, we predicted the existence of an optimal programmed death rate that maximizes population growth under stress. We further predicted that altruistic death could generate the 'Eagle effect', a counter-intuitive phenomenon in which bacteria appear to grow better when treated with higher antibiotic concentrations. In support of these modeling insights, we experimentally demonstrated both the optimality of the programmed death rate and the Eagle effect using our engineered system. Our findings fill a critical conceptual gap in the analysis of the evolution of bacterial programmed death, and have implications for the design of antibiotic treatments.
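A toy model, entirely an assumption of this sketch rather than the paper's equations, makes the predicted trade-off concrete: programmed death removes cells directly but releases a public good that degrades the antibiotic, so one can scan the death rate d for the value that maximises the surviving population:

    import numpy as np

    def final_population(d, A0, T=20.0, dt=0.01):
        # Toy ODE model (illustrative assumption, not the paper's system):
        # N = population, G = public good, A = antibiotic concentration.
        N, G, A = 1.0, 0.0, A0
        for _ in range(int(T / dt)):
            growth = 1.0 * N / (1.0 + A)   # growth suppressed by antibiotic
            kill = 0.8 * A * N             # antibiotic-induced death
            dN = growth - kill - d * N     # programmed death at rate d
            dG = d * N - 0.1 * G           # dying cells release the good
            dA = -0.5 * G * A              # the good degrades the antibiotic
            N = max(N + dN * dt, 0.0)
            G += dG * dt
            A = max(A + dA * dt, 0.0)
        return N

    rates = np.linspace(0.0, 1.0, 21)
    best = max(rates, key=lambda d: final_population(d, A0=2.0))
    print(best)   # death rate maximising final population in this toy model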
Abstract:
Activation of CD4+ T cells results in rapid proliferation and differentiation into effector and regulatory subsets. CD4+ effector T cell (Teff) (Th1 and Th17) and Treg subsets are metabolically distinct, yet the specific metabolic differences that modify T cell populations are uncertain. Here, we evaluated CD4+ T cell populations in murine models and determined that inflammatory Teffs maintain high expression of glycolytic genes and rely on high glycolytic rates, while Tregs are oxidative and require mitochondrial electron transport to proliferate, differentiate, and survive. Metabolic profiling revealed that pyruvate dehydrogenase (PDH) is a key bifurcation point between T cell glycolytic and oxidative metabolism. PDH function is inhibited by PDH kinases (PDHKs). PDHK1 was expressed in Th17 cells, but not Th1 cells, and at low levels in Tregs, and inhibition or knockdown of PDHK1 selectively suppressed Th17 cells and increased Tregs. This alteration in the CD4+ T cell populations was mediated in part through ROS, as N-acetyl cysteine (NAC) treatment restored Th17 cell generation. Moreover, inhibition of PDHK1 modulated immunity and protected animals against experimental autoimmune encephalomyelitis, decreasing Th17 cells and increasing Tregs. Together, these data show that CD4+ subsets utilize and require distinct metabolic programs that can be targeted to control specific T cell populations in autoimmune and inflammatory diseases.
Abstract:
Recent research has cast doubt on the potential for various electoral reforms to increase voter turnout. In this article, we examine the effectiveness of preregistration laws, which allow young citizens to register before becoming eligible to vote. We use two empirical approaches to evaluate the impact of preregistration on youth turnout. First, we implement difference-in-differences and lag models to bracket the causal effect of preregistration implementation using the 2000-2012 Current Population Survey. Second, focusing on the state of Florida, we leverage a discontinuity based on date of birth to estimate the effect of increased preregistration exposure on the turnout of young registrants. In both approaches, we find that preregistration increases voter turnout, with equal effectiveness for various subgroups in the electorate. More broadly, the observed patterns suggest that campaign context and supporting institutions may help to determine when and whether electoral reforms are effective.
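The logic of the first approach can be shown with a minimal difference-in-differences computation; the turnout rates below are made-up illustrative numbers, not the paper's data or estimates:

    # Adopting states ("treated") vs. non-adopters ("control"),
    # before and after preregistration takes effect.
    turnout = {
        ("treated", "pre"): 0.38, ("treated", "post"): 0.45,
        ("control", "pre"): 0.40, ("control", "post"): 0.42,
    }

    did = ((turnout[("treated", "post")] - turnout[("treated", "pre")])
           - (turnout[("control", "post")] - turnout[("control", "pre")]))
    print(f"DiD estimate of the preregistration effect: {did:+.3f}")  # +0.050

The control states' pre/post change absorbs the common time trend, so the remaining difference is attributed to preregistration under the usual parallel-trends assumption.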
Abstract:
An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC), Intel Development Center, Haifa, Israel, on November 23, 2015.
Abstract:
Of key importance to oil and gas companies is the size distribution of fields in the areas they are drilling. Recent arguments suggest that there are many more fields yet to be discovered in mature provinces than had previously been thought, because the underlying distribution is monotonic, not peaked. On this view, the peaked shape of the distribution of discovered fields reflects not the underlying distribution but the effect of economic truncation. This paper contributes to the discussion by analysing up-to-date exploration and discovery data for two mature provinces using the discovery-process model, which is based on sampling without replacement and implicitly includes economic truncation effects. The maximum likelihood estimation involved generates a high-dimensional mixed-integer nonlinear optimization problem. A highly efficient solution strategy is tested, exploiting the separable structure and handling the integer constraints by treating the problem as a masked allocation problem in dynamic programming.
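The allocation flavour of the solution strategy can be sketched generically; the code below is a standard separable-allocation dynamic program, a stand-in for, not a reproduction of, the authors' masked allocation formulation:

    def allocate(values, total):
        # Choose non-negative integers x_i with sum(x_i) == total to
        # maximise sum_i values[i][x_i]; values[i][x] is the (separable)
        # payoff of assigning x units to stage i.
        best = [float("-inf")] * (total + 1)
        best[0] = 0.0
        for f in values:                            # one DP stage per item
            new = [float("-inf")] * (total + 1)
            for t in range(total + 1):
                for x in range(min(len(f) - 1, t) + 1):
                    if best[t - x] > float("-inf"):
                        new[t] = max(new[t], best[t - x] + f[x])
            best = new
        return best[total]

    # Two stages, budget 3; f[x] is the payoff of giving x units to a stage.
    print(allocate([[0, 5, 7, 8], [0, 4, 9, 10]], 3))   # best split: 5 + 9 = 14

Because the objective separates across stages, each DP stage needs only the running optimum for every remaining budget, which keeps the integer constraints tractable.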
Abstract:
Three paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation -- a nonlinear, structured-grid partial differential equation boundary value problem -- using the same algorithm on the same hardware. All of the paradigms -- parallel languages, represented by the Portland Group's HPF; (semi-)automated serial-to-parallel source-to-source translation, represented by CAPTools from the University of Greenwich; and parallel libraries, represented by Argonne's PETSc -- are found to be easy to use for this problem class, and all are reasonably effective in exploiting concurrency after a short learning curve. Under any of the paradigms, the application programmer must specify the data partitioning, corresponding to a geometrically simple decomposition of the domain of the PDE. Programming in SPMD style for the PETSc library requires writing only the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm as a starting point, introduction of concurrency through subdomain blocking (a task similar to the index mapping), and modest experimentation with rewriting loops to expose the latent concurrency to the compiler. Programming with CAPTools involves feeding the same sequential implementation to the CAPTools interactive parallelization system and guiding the source-to-source code transformation by responding to queries about quantities knowable only at runtime. Results representative of "the state of the practice" for a scaled sequence of structured-grid problems are given on three of the most important contemporary high-performance platforms: the IBM SP, the SGI Origin 2000, and the Cray T3E.
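For readers outside the field, the kernel that all three paradigms parallelise looks roughly like the serial sketch below: a 2-D nonlinear structured-grid residual in the spirit of the Bratu problem. The function name and parameter values are illustrative assumptions taken from none of the three systems:

    import numpy as np

    def residual(u, lam, h):
        # Residual of -lap(u) = lam * exp(u) on a uniform grid with
        # spacing h and homogeneous Dirichlet boundaries. Each interior
        # point touches only its four neighbours, so a geometric subdomain
        # decomposition (one block per processor, with ghost-cell exchange
        # at block edges) parallelises it cleanly.
        r = np.zeros_like(u)
        r[1:-1, 1:-1] = (
            (4.0 * u[1:-1, 1:-1]
             - u[:-2, 1:-1] - u[2:, 1:-1]
             - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
            - lam * np.exp(u[1:-1, 1:-1])
        )
        return r

    n = 33
    u = np.zeros((n, n))                          # initial iterate
    print(np.abs(residual(u, lam=6.0, h=1.0 / (n - 1))).max())   # 6.0 here

Under PETSc the user supplies exactly such residual and Jacobian routines; under HPF or CAPTools, the loops behind these array operations are what the programmer annotates or the tool transforms.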