215 results for Generalized Basic Hypergeometric Functions
Abstract:
While it is commonly accepted that computability on a Turing machine in polynomial time represents a correct formalization of the notion of a feasibly computable function, there is no similar agreement on how to extend this notion to functionals, that is, on which functionals should be considered feasible. One possible paradigm was introduced by Mehlhorn, who extended Cobham's definition of feasible functions to type 2 functionals. Subsequently, this class of functionals (with inessential changes to the definition) was studied by Townsend, who calls this class POLY, and by Kapron and Cook, who call the same class basic feasible functionals. Kapron and Cook gave an oracle Turing machine model characterisation of this class. In this article, we demonstrate that the class of basic feasible functionals has recursion-theoretic properties which naturally generalise the corresponding properties of the class of feasible functions, thus giving further evidence that the notion of feasibility of functionals mentioned above is correctly chosen. We also improve the Kapron and Cook result on machine representation. Our proofs are based on essential applications of logic. We introduce a weak fragment of second order arithmetic with second order variables ranging over functions from N to N which suitably characterises basic feasible functionals, and show that it is a useful tool for investigating the properties of basic feasible functionals. In particular, we provide an example of how one can extract feasible programs from mathematical proofs that use nonfeasible functions.
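To make the type-2 setting concrete, the following is a minimal Python sketch (not taken from the article): it only illustrates the shape of a type-2 functional, which takes a function argument as well as a number argument. The function name and oracle are invented, and the sketch does not enforce the second-order polynomial time bound that membership in the basic feasible functionals would require.

```python
from typing import Callable

def sum_of_queries(f: Callable[[int], int], x: int) -> int:
    """Toy type-2 functional: combines polynomially many oracle answers.

    Illustrative only: it shows a functional with a function argument f and
    a number argument x. Being "basic feasible" would additionally require a
    second-order polynomial bound on running time, which this code does not certify.
    """
    total = 0
    for i in range(x.bit_length()):   # |x| many queries, polynomial in the input length
        total += f(i)                 # each oracle answer contributes to the result
    return total

# Example call with a feasible oracle
print(sum_of_queries(lambda i: i * i, 1000))   # 1000 has 10 bits -> sum of 0^2..9^2 = 285
```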
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
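As a rough illustration of the low-exposure point (a minimal sketch with made-up exposure and risk values, not the simulation experiment reported in the study), the Python snippet below draws crash counts from independent Bernoulli trials with unequal probabilities and compares the observed share of zero counts with the zero probability of a single Poisson model fitted to the pooled mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical entities (e.g., intersections): each is exposed to its own number of
# independent trials (vehicle passages) per period, with small, unequal per-trial
# crash probabilities ("Poisson trials"). All values are invented for illustration.
n_entities = 2000
exposure = rng.integers(10, 5000, size=n_entities)      # mix of low- and high-exposure sites
site_risk = rng.uniform(1e-4, 2e-3, size=n_entities)    # site-specific mean per-trial risk

counts = np.array([
    (rng.random(n) < rng.uniform(0, 2 * r, size=n)).sum()   # unequal per-trial probabilities
    for n, r in zip(exposure, site_risk)
])

lam = counts.mean()                            # single Poisson fitted by the pooled mean
print("observed P(count = 0):", np.mean(counts == 0))
print("fitted Poisson P(0):  ", np.exp(-lam))  # typically much smaller: "excess" zeros emerge
```

Under these assumptions the apparent excess zeros come from low-exposure sites and heterogeneity across sites, not from a separate "perfectly safe" state.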
Abstract:
In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
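To see why fixed proportions cannot hold when crash-type SPFs respond differently to traffic volume, here is a minimal Python sketch. The power form of the SPF is the conventional one, but the coefficients below are invented for illustration and are not the models estimated in the paper.

```python
import numpy as np

# Hypothetical SPFs of the usual power form mu = exp(b0) * AADT_maj**b1 * AADT_min**b2,
# one per crash type. Coefficients are made up for illustration only.
def spf(aadt_major, aadt_minor, b0, b1, b2):
    return np.exp(b0) * aadt_major**b1 * aadt_minor**b2

aadt_minor = 400
aadt_major = np.array([2_000, 20_000, 82_000])

mu_intersecting = spf(aadt_major, aadt_minor, b0=-8.0,  b1=0.55, b2=0.60)
mu_same_dir     = spf(aadt_major, aadt_minor, b0=-12.0, b1=1.10, b2=0.25)

total = mu_intersecting + mu_same_dir
for a, p_int, p_same in zip(aadt_major, mu_intersecting / total, mu_same_dir / total):
    print(f"major AADT {a:>6}: intersecting {p_int:.0%}, same-direction {p_same:.0%}")
```

Because the same-direction SPF here grows faster in major-road AADT than the intersecting-direction SPF (b1 of 1.10 versus 0.55), the predicted proportions shift with volume, so no single fixed split can be correct at all AADT levels.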
Abstract:
Summary of Actions Towards Sustainable Outcomes

Environmental Issues / Principal Impacts
The increased growth of cities is intensifying its impact on people and the environment through:
• increased use of energy for the heating and cooling of more buildings, leading to urban heat islands and more greenhouse gas emissions
• an increased amount of hard surfaces, contributing to higher temperatures in cities and more stormwater runoff
• degraded air quality and noise impact
• reduced urban biodiversity
• compromised health and general well-being of people

Basic Strategies
In many design situations boundaries and constraints limit the application of cutting EDGe actions. In these circumstances designers should at least consider the following:
• Consider green roofs early in the design process in consultation with all stakeholders to enable maximised integration with building systems and to mitigate building cost (avoid constructing as a retrofit).
• Design the green roof as part of a building's structural, mechanical and hydraulic systems; this could lead to structural efficiency, the ability to optimise cooling benefits and better integrated water recycling systems.
• Inform the selection of the type of green roof by considering its function, for example designing for social activity, the required maintenance/access regime, recycling of water, habitat regeneration or a combination of uses.
• Evaluate the existing surroundings to determine possible links to the natural environment and to inform the choice of vegetation for the green roof, taking into account the availability of local plant supply and expertise.

Cutting EDGe Strategies
• Create green roofs that contribute positively to the environment through a reduced urban heat island effect and lower building temperatures, improved stormwater quality, increased natural habitat, provision of social spaces and opportunities for increased local food supply.
• Maximise solar panel efficiency by incorporating the panels into the design of the green roof.
• Integrate multiple functions on a single green roof, such as grey water recycling, food production, more biodiverse plantings, air quality improvement and provision of delightful spaces for social interaction.

Synergies & References
• BEDP Environment Design Guide: DES 53: Roof and Facade Gardens; GEN 4: Positive Development – designing for Net Positive Impacts; TEC 26: Living Walls - a way to green the built environment
• Green Roofs Australia: www.greenroofs.wordpress.com
• International Green Roof Association: www.igra-world.com
• Green Roofs for Healthy Cities (USA): www.greenroofs.org
• Centre for Urban Greenery and Ecology (Singapore): http://research.cuge.com.sg
Abstract:
In recent years, local government infrastructure management practices have evolved from conventional land use planning to more wide-ranging and integrated urban growth and infrastructure management approaches. The roles and responsibilities of local government are no longer simply to manage the daily operational functions of a city and provide basic infrastructure. Local governments are now required to undertake economic planning, manage urban growth, become involved in major infrastructure planning, and even engage in achieving sustainable development objectives. The Brisbane Urban Growth model has proven initially successful in ensuring timely and coordinated delivery of urban infrastructure. This model may be the first step for many local governments in moving toward integrated, sustainable and effective infrastructure management.
Abstract:
The Arabidopsis thaliana NPR1 has been shown to be a key regulator of gene expression during the onset of a plant disease-resistance response known as systemic acquired resistance. The npr1 mutant plants fail to respond to systemic acquired resistance-inducing signals such as salicylic acid (SA) or to express SA-induced pathogenesis-related (PR) genes. Using NPR1 as bait in a yeast two-hybrid screen, we identified a subclass of transcription factors in the basic leucine zipper protein family (AHBP-1b and TGA6) and showed that they interact specifically with NPR1 in yeast and in vitro. Point mutations that abolish NPR1 function in A. thaliana also impair the interactions between NPR1 and the transcription factors in the yeast two-hybrid assay. Furthermore, a gel mobility shift assay showed that the purified transcription factor protein, AHBP-1b, binds specifically to an SA-responsive promoter element of the A. thaliana PR-1 gene. These data suggest that NPR1 may regulate PR-1 gene expression by interacting with a subclass of basic leucine zipper protein transcription factors.
Abstract:
The collaboration of clinicians with basic science researchers is crucial for addressing clinically relevant research questions. In order to initiate such mutually beneficial relationships, we propose a model where early career clinicians spend a designated time embedded in established basic science research groups in order to pursue a postgraduate qualification. During this time, clinicians become integral members of the research team, fostering long-term relationships and opening up opportunities for continuing collaboration. However, for these collaborations to be successful there are pitfalls to be avoided. Limited time and funding can lead to attempts to answer clinical challenges with highly complex research projects, characterised by a large number of "clinical" factors introduced in the hope that the research outcomes will be more clinically relevant. As a result, the complexity of such studies and the variability of their outcomes may lead to difficulties in drawing scientifically justified and clinically useful conclusions. Consequently, we stress that it is the obligation of both the basic science researcher and the clinician to be mindful of the limitations and challenges of such multi-factorial research projects. A systematic step-by-step approach to addressing clinical research questions with limited, but highly targeted and well defined, research projects provides the solid foundation which may lead to the development of a longer term research program for addressing more challenging clinical problems. Ultimately, we believe that it is such models, encouraging the vital collaboration between clinicians and researchers on targeted, well defined research projects, which will result in answers to the important clinical challenges of today.
Abstract:
The NIR spectra of reichenbachite, scholzite and parascholzite have been studied at 298 K. The spectra of the minerals are different, in line with compositional and crystal-structural variations. Cation substitution effects are significant in their electronic spectra, and three distinctly different electronic transition bands are observed in the near-infrared spectra at high wavenumbers in the 12000-7600 cm-1 spectral region. The reichenbachite electronic spectrum is characterised by Cu(II) transition bands at 9755 and 7520 cm-1. A broad spectral feature is observed for the ferrous ion in the 12000-9000 cm-1 region in both scholzite and parascholzite. Some similarities in the vibrational spectra of the three phosphate minerals are observed, particularly in the OH stretching region. The observation of a strong band at 5090 cm-1 indicates strong hydrogen bonding in the structure of the dimorphs scholzite and parascholzite. The three phosphates exhibit overlapping bands in the 4800-4000 cm-1 region resulting from combinations of the vibrational modes of (PO4)3- units.
Abstract:
Dr. Richard Shapcott is a senior lecturer in International Relations at the University of Queensland. His research interests concern international ethics, cosmopolitan political theory and cultural diversity. He is the author of the recently published book International Ethics: A Critical Introduction, and of several other pieces, such as “Anti-Cosmopolitanism, the Cosmopolitan Harm Principle and Global Dialogue,” in Michalis’ and Petito’s book Civilizational Dialogue and World Order. He is also the author of “Dialogue and International Ethics: Religion, Cultural Diversity and Universalism,” in Patrick Hayden’s The Ashgate Research Companion to Ethics and International Relations.
Abstract:
This article applies social network analysis techniques to a case study of police corruption in order to produce findings which will assist in corruption prevention and investigation. Police corruption is commonly studied, but sophisticated analytical tools are rarely engaged to add rigour to the field of study. This article analyses the ‘First Joke’, a systemic and long-lasting corruption network in the Queensland Police Force, a state police agency in Australia. It uses data obtained from a commission of inquiry which exposed the network, and develops hypotheses as to the nature of the network's structure based on the existing literature on dark networks and criminal networks. These hypotheses are tested by entering the data into UCINET and analysing the outcomes through the social network analysis measures of average path distance, centrality and density. The conclusions reached show that the network has characteristics not predicted by the literature.
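For readers unfamiliar with the measures named here, the sketch below computes density, average path distance and two centrality measures with the networkx library on a small invented graph. The article itself used UCINET and the actual ‘First Joke’ network data from the commission of inquiry; the graph and node labels below are hypothetical.

```python
import networkx as nx

# Toy, invented graph standing in for a corruption network; the real analysis in the
# article used UCINET on data from the commission of inquiry, not this sketch.
G = nx.Graph([
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("D", "F"),
])

print("density:", nx.density(G))
print("average path distance:", nx.average_shortest_path_length(G))
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```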