974 results for Count Basil
Abstract:
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated on methodological developments and applications of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman & Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
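The core idea of estimating the size and timing of a seasonal peak can be sketched outside R as well. Below is a minimal Python sketch of a cosinor-style model (an illustration with made-up data, not the ‘season’ package itself): counts are regressed on sine and cosine terms of the time variable, and the fitted coefficients give the amplitude (size) and phase (timing) of the seasonal peak.

```python
import numpy as np

def fit_cosinor(y, t, period=12.0):
    """Fit y ~ b0 + b1*cos(2*pi*t/period) + b2*sin(2*pi*t/period) by least squares.

    Returns (amplitude, phase): the size of the seasonal effect and the
    time within the period at which the fitted seasonal curve peaks."""
    w = 2 * np.pi * np.asarray(t, dtype=float) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    b0, b1, b2 = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    phase = (np.arctan2(b2, b1) * period / (2 * np.pi)) % period
    return amplitude, phase
```

With monthly data (t in months, period 12), the returned phase is the month of the seasonal peak; confidence intervals and non-Gaussian outcomes would need the fuller machinery the book describes.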
Abstract:
Robust texture recognition in underwater image sequences for marine pest population control such as Crown-Of-Thorns Starfish (COTS) is a relatively unexplored area of research. Typically, humans count COTS by laboriously processing individual images taken during surveys. Being able to autonomously collect and process images of reef habitat and segment out the various marine biota holds the promise of allowing researchers to gain a greater understanding of the marine ecosystem and evaluate the impact of different environmental variables. This research applies and extends the use of Local Binary Patterns (LBP) as a method for texture-based identification of COTS from survey images. The performance and accuracy of the algorithms are evaluated on an image data set taken on the Great Barrier Reef.
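As a rough illustration of the core technique, the basic 8-neighbour LBP (the plain variant, not the paper's extension) can be sketched in a few lines of numpy: each pixel is coded by thresholding its 3×3 neighbourhood against the centre value, and the histogram of codes serves as the texture descriptor.

```python
import numpy as np

def lbp_basic(img):
    """Basic 8-neighbour Local Binary Pattern of a 2-D grayscale image.

    Each interior pixel gets an 8-bit code: bit k is set when neighbour k
    is at least as bright as the centre pixel."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner of each 3x3 window
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code += (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes - the texture descriptor
    a classifier would be trained on."""
    h = np.bincount(lbp_basic(img).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

A classifier for COTS texture would then compare such histograms (e.g. by chi-square distance) between survey image patches and labelled training patches.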
Abstract:
The rhetoric of the pedagogic discourses of landscape architecture students and interior design students is described as part of a doctoral study undertaken to document practices and orientations prior to cross-disciplinary collaboration. We draw on the theoretical framework of Basil Bernstein, an educational sociologist, and the rhetorical method of Kenneth Burke, a literary dramatist, to study the grammars of ‘landscape’ representation employed within these disciplinary examples. We investigate how prepared final-year students are for working in a cross-disciplinary manner. The discursive interactions of their work, as illustrated by four examples of drawn images and written text, are described. Our findings suggest that we need to concern ourselves with the aspects of our pedagogic discourse that bring uniqueness and value to our disciplines, as well as with the discourses shared between disciplines.
Abstract:
The forms social enterprises can take and the industries they operate in are so many and various that it has always been a challenge to define, find and count social enterprises. In 2009 Social Traders partnered with the Australian Centre for Philanthropy and Nonprofit Studies (ACPNS) at Queensland University of Technology to define social enterprise and, for the first time in Australia, to identify and map the social enterprise sector: its scope, its variety of forms, its reasons for trading, its financial dimensions, and the individuals and communities social enterprises aim to benefit.
Abstract:
This paper proposes a flying-capacitor-based chopper circuit for dc capacitor voltage equalization in diode-clamped multilevel inverters. Its important features are reduced voltage stress across the chopper switches, possible reduction in the chopper switching frequency, improved reliability, and ride-through capability enhancement. This topology is analyzed using three- and four-level flying-capacitor-based chopper circuit configurations. These configurations differ in capacitor and semiconductor device count and correspondingly reduce the device voltage stresses by half and one-third, respectively. The detailed working principles and control schemes for these circuits are presented. It is shown that, by preferentially selecting the available chopper switch states, the dc-link capacitor voltages can be efficiently equalized in addition to having tightly regulated flying-capacitor voltages around their references. The various operating modes of the chopper are described along with their preferential selection logic to achieve the desired performances. The performance of the proposed chopper and the corresponding control schemes is confirmed through both simulation and experimental investigations.
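Stripped of the power electronics, the preferential-selection idea can be caricatured in a few lines: at each control step the controller picks, among the available redundant switch states, one that transfers charge toward whichever dc-link capacitor voltage is lower. The sketch below is a hypothetical discrete-time toy (the transfer and load quantities are illustrative constants, not values from the paper's circuit).

```python
def balance_step(v1, v2, load1, load2, transfer=0.5):
    """One control step for two series dc-link capacitors: each voltage sags
    under its (possibly unequal) load, then the chopper's preferential state
    selection moves a fixed charge increment toward the lower voltage."""
    v1 -= load1
    v2 -= load2
    if v1 < v2:          # select the state that charges capacitor 1
        v1 += transfer
        v2 -= transfer
    elif v2 < v1:        # select the state that charges capacitor 2
        v2 += transfer
        v1 -= transfer
    return v1, v2

def simulate(v1, v2, load1, load2, steps):
    """Run the balancing loop for a number of control steps."""
    for _ in range(steps):
        v1, v2 = balance_step(v1, v2, load1, load2)
    return v1, v2
```

Even this toy shows the essential property claimed for the chopper: an initial imbalance shrinks step by step while the total stored voltage is preserved.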
Abstract:
The most costly operations encountered in pairing computations are those that take place in the full extension field F_{p^k}. At high levels of security, the complexity of operations in F_{p^k} dominates the complexity of the operations that occur in the lower-degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller’s algorithm. Many recent optimizations in the literature have focused on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller’s algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees, where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller’s algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed-ups of between 5 and 30 percent in most practical situations.
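The structure driving these operation counts is the double-and-add Miller loop: one doubling step per bit of the loop order r, plus one addition step per set bit. A toy tally in Python (the per-step subfield/extension operation counts are illustrative placeholders, not the paper's formulas; a schoolbook multiplication in the degree-k extension is costed at k² base-field multiplications) shows why extension-field operations dominate as the embedding degree k grows.

```python
def miller_loop_cost(r, k, sub_mults_dbl=10, ext_mults_dbl=1,
                     sub_mults_add=10, ext_mults_add=1):
    """Tally base-field multiplications for a double-and-add Miller loop.

    Doubling steps: one per bit of r after the leading 1.
    Addition steps: one per set bit of r after the leading 1.
    One extension-field multiplication is counted as k*k base-field
    multiplications (schoolbook); the per-step counts are illustrative."""
    bits = bin(r)[3:]                 # bits of r below the leading 1
    doublings = len(bits)
    additions = bits.count("1")
    subfield = doublings * sub_mults_dbl + additions * sub_mults_add
    extension = (doublings * ext_mults_dbl + additions * ext_mults_add) * k * k
    return subfield, extension
```

Holding the per-iteration counts fixed, the extension-field share of the cost grows quadratically with k, which is why saving even one F_{p^k} operation per iteration matters more than saving several subfield operations at large embedding degrees.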
Abstract:
ROLE OF THE LOW-AFFINITY β1-ADRENERGIC RECEPTOR IN NORMAL AND DISEASED HEARTS Background: The β1-adrenergic receptor (AR) has at least two binding sites, β1HAR and β1LAR (the high- and low-affinity sites of the β1AR, respectively), which cause cardiostimulation. Some β-blockers, for example (-)-pindolol and (-)-CGP 12177, can activate β1LAR at higher concentrations than those required to block β1HAR. While β1HAR can be blocked by all clinically used β-blockers, β1LAR is relatively resistant to blockade. Thus, chronic β1LAR activation may occur in the setting of β-blocker therapy, thereby mediating persistent βAR signaling, and it is important to determine the potential significance of β1LAR in vivo, particularly in disease settings. Methods and results: C57Bl/6 male mice were used. Chronic (4-week) β1LAR activation was achieved by treatment with (-)-CGP 12177 via osmotic minipump. Cardiac function was assessed by echocardiography and catheterization. (-)-CGP 12177 treatment in healthy mice increased heart rate and left ventricular (LV) contractility without detectable LV remodelling or hypertrophy. In mice subjected to an 8-week period of aorta banding, (-)-CGP 12177 treatment given during weeks 4-8 led to a positive inotropic effect. (-)-CGP 12177 treatment exacerbated LV remodelling, indicated by a worsening of LV hypertrophy by ??% (estimated by weight, wall thickness and cardiomyocyte size) and interstitial/perivascular fibrosis (by histology). Importantly, (-)-CGP 12177 treatment of aorta-banded mice exacerbated cardiac expression of hypertrophic, fibrogenic and inflammatory genes (all p<0.05 vs. non-treated controls with aorta banding). Conclusion: β1LAR activation provides functional support to the heart in both normal and diseased (pressure-overload) settings. Sustained β1LAR activation in the diseased heart exacerbates LV remodelling and may therefore promote disease progression from compensatory hypertrophy to heart failure.
Abstract:
Ghrelin and obestatin are two peptides associated with appetite control and the regulation of energy balance in adults, and it is plausible that they also play an important role in growth and development during puberty. These peptides, in addition to others, form part of the substrate underlying energy homeostasis, which in turn contributes to body weight regulation and could explain changes in energy balance during puberty. Both peptides originate from the stomach; hence, they are likely involved in generating signals from tissue stores that influence food intake. This could be manifested via alterations in the drive to eat (i.e. hunger), eating behaviors and appetite regulation. Furthermore, there is some evidence that these peptides might also be associated with physical activity behaviors and metabolism. Children and adolescents experience behavioral and metabolic changes during growth and development that will be associated with these physiological changes.
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate, by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
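The summation idea can be sketched very simply. The toy Python below uses a hypothetical area-based mapping (the paper itself uses local features and tracking rather than raw blob area, and the reference person footprint here is an illustrative constant): each foreground blob is assigned a group-size estimate, and the crowd estimate is their sum.

```python
def estimate_group_size(blob_area, person_area=400.0):
    """Rough size of one group: blob pixels divided by a reference
    single-person footprint. person_area is an illustrative constant;
    a real system would calibrate it (and use richer local features)."""
    return max(1, round(blob_area / person_area))

def crowd_estimate(blob_areas):
    """Total crowd size = sum of the per-blob group estimates."""
    return sum(estimate_group_size(a) for a in blob_areas)
```

Tracking enters on top of this: by following each blob over time through splits and merges, per-frame estimates like these can be smoothed against the group's history rather than trusted frame by frame.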
Abstract:
The resource allocation and utilization discourse is dominated by debates about rights, particularly individual property rights and ownership. This is due largely to the philosophic foundations provided by Hobbes and Locke and adopted by Bentham. In our community, though, resources come not merely with rights embedded but also with obligations. The relevant laws and equitable principles which give shape to our shared rights and obligations with respect to resources take cognizance not merely of the title to the resource (the proprietary right) but of the particular context in which the right is exercised. Moral philosophy regarding resource utilisation has from ancient times taken cognizance of obligations, but with the ascendance of modernity the agenda of moral philosophy regarding resources has been dominated, at least since John Locke, by a preoccupation with property rights; the ethical obligations associated with resource management have been largely ignored, as has the particular social context.
Exploring this applied ethical terrain regarding resource utilisation, this thesis: (1) revisits the justifications for modern property rights (and, within that, the exclusion of obligations); (2) identifies major deficiencies in these justifications and the reasons for them; (3) traces the concept of stewardship as understood in classical Greek writing and in the New Testament, and considers its application in the Patristic period and by Medieval and reformist writers, before turning to investigate its influence on legal and equitable concepts through to the current day; (4) discusses the nature of the stewardship obligation, maps it, and offers a schematic for applying the Stewardship Paradigm to problems arising in daily life; and (5) discusses the way in which the Stewardship Paradigm may be applied by, and assists in resolving issues arising from within, four dominant philosophic world views: (a) Rawls' social contract theory; (b) utilitarianism as discussed by Peter Singer; (c) Christianity, with particular focus on the theology of Douglas Hall; and (d) feminism, particularly as expressed in the ethics of care of Carol Gilligan; and offers some more general comments about stewardship in the context of an ethically plural community.
Abstract:
This thesis is a problematisation of the teaching of art to young children. To problematise a domain of social endeavour is, in Michel Foucault's terms, to ask how we come to believe that "something ... can and must be thought" (Foucault, 1985:7). The aim is to document what counts (i.e., what is sayable, thinkable, feelable) as proper art teaching in Queensland at this point of historical time. In this sense, the thesis is a departure from more recognisable research on 'more effective' teaching, including critical studies of art teaching and early childhood teaching. It treats 'good teaching' as an effect of moral training made possible through disciplinary discourses organised around certain epistemic rules at a particular place and time. There are four key tasks accomplished within the thesis. The first is to describe an event which is not easily resolved by means of orthodox theories or explanations, either liberal-humanist or critical ones. The second is to indicate how poststructuralist understandings of the self and social practice enable fresh engagements with uneasy pedagogical moments. What follows this discussion is the documentation of an empirical investigation made into texts generated by early childhood teachers, artists and parents about what constitutes 'good practice' in art teaching. Twenty-two participants produced text to tell and re-tell the meaning of 'proper' art education from different subject positions. Rather than attempting to capture 'typical' representations of art education in the early years, a pool of 'exemplary' teachers, artists and parents was chosen, using "purposeful sampling", and from this pool, three videos were filmed and later discussed by the audience of participants.
The fourth aspect of the thesis involves developing a means of analysing these texts in such a way as to allow a 're-description' of the field of art teaching by attempting to foreground the epistemic rules through which such teacher-generated texts come to count as true, i.e., as propriety in art pedagogy. This analysis drew on Donna Haraway's (1995) understanding of 'ironic' categorisation to hold the tensions within the propositions inside the categories of analysis rather than setting these up as discursive oppositions. The analysis is therefore ironic in the sense that Richard Rorty (1989) understands the term to apply to social scientific research. Three 'ironic' categories were argued to inform the discursive construction of 'proper' art teaching. It is argued that a teacher should (a) teach without teaching; (b) manufacture the natural; and (c) train for creativity. These ironic categories work to undo modernist assumptions about theory/practice gaps and finding a 'balance' between oppositional binary terms. They were produced through a discourse-theoretical reading of the texts generated by the participants in the study, texts that these same individuals use as a means of discipline and self-training as they work to teach properly. In arguing the usefulness of such approaches to empirical data analysis, the thesis challenges early childhood research in arts education in relation to its capacity to deal with ambiguity and to acknowledge contradiction in the work of teachers and in their explanations for what they do. It works as a challenge at a range of levels - at the level of theorising, of method and of analysis. In opening up thinking about normalised categories, and questioning traditional Western philosophy and the grand narratives of early childhood art pedagogy, it makes a space for re-thinking art pedagogy as "a game of truth and error" (Foucault, 1985). In doing so, it opens up a space for thinking how art education might be otherwise.
Abstract:
Earlier studies have shown that the influence of fixation stability on bone healing diminishes with advanced age. The goal of this study was to unravel the relationship between mechanical stimulus and age on callus competence at a tissue level. Using 3D in vitro micro-computed tomography derived metrics, 2D in vivo radiography, and histology, we investigated the influences of age and varying fixation stability on callus size, geometry, microstructure, composition, remodeling, and vascularity. Four groups with a 1.5-mm osteotomy gap in the femora of Sprague–Dawley rats were compared: young rigid (YR), young semirigid (YSR), old rigid (OR), and old semirigid (OSR). The hypothesis was that calcified callus microstructure and composition are impaired with advanced age, and that older individuals would show a reduced response to differing fixation stabilities. Semirigid fixation resulted in a larger ΔCSA (callus cross-sectional area) compared to the rigid groups. In vitro μCT analysis at 6 weeks postmortem showed callus bridging scores in younger animals to be superior to those of their older counterparts (p<0.01). Younger animals showed (i) larger callus strut thickness (p<0.001), (ii) lower perforation in struts (p<0.01), and (iii) higher mineralization of callus struts (p<0.001). Callus mineralization was reduced in young animals with semirigid fracture fixation but remained unaffected in the aged group. While stability had an influence on callus size and geometry, age showed none. With no differences observed in relative osteoid areas in the callus ROI, old as well as semirigidly fixated animals showed a higher osteoclast count (p<0.05). Blood vessel density was reduced in animals with semirigid fixation (p<0.05). In conclusion, in vivo monitoring indicated delayed callus maturation in aged individuals. Callus bridging and callus competence (microstructure and mineralization) were impaired in individuals of advanced age.
This matched with increased bone resorption due to higher osteoclast numbers. Varying fixator configurations in older individuals did not alter the dominant effect of advanced age on callus tissue mineralization, unlike in their younger counterparts. Age-associated influences appeared independent of stability. This study illustrates the dominating role of osteoclastic activity in age-related impaired healing, while demonstrating that optimizing fixation parameters such as stiffness appears less effective in influencing healing in aged individuals.
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions, the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in mean structure while the remainder of them included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criteria (DIC) statistics. 
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
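The varying-dispersion idea can be sketched outside the Bayesian machinery. Below is a minimal maximum-likelihood version in Python (an illustration with simulated data, not the study's MCMC/Gibbs approach): a negative binomial (NB2) with mean μ_i = exp(X_i β) and observation-specific dispersion α_i = exp(Z_i γ), with (β, γ) fitted jointly by scipy.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def nb_loglik(params, y, X, Z):
    """NB2 log-likelihood with a modeled dispersion parameter:
    mean mu_i = exp(X_i beta), dispersion alpha_i = exp(Z_i gamma),
    so Var(y_i) = mu_i + alpha_i * mu_i**2."""
    beta, gamma = params[:X.shape[1]], params[X.shape[1]:]
    mu = np.exp(X @ beta)
    r = np.exp(-(Z @ gamma))          # r_i = 1 / alpha_i
    return np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                  + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

def fit_varying_dispersion(y, X, Z):
    """Joint MLE of (beta, gamma) by quasi-Newton optimisation."""
    x0 = np.zeros(X.shape[1] + Z.shape[1])
    return minimize(lambda p: -nb_loglik(p, y, X, Z), x0, method="BFGS")
```

Setting Z to a single intercept column recovers the usual fixed-dispersion model; adding covariates to Z is the "dispersion as a function of covariates" specification the study examines, and a likelihood-ratio or significance test on γ then plays the role of testing the extra-variation function.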
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
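The excess-zeros point is easy to reproduce numerically. The sketch below (with illustrative parameters, not the study's simulation design) draws Poisson counts whose rates vary across sites; by Jensen's inequality E[e^{-λ}] ≥ e^{-E[λ]}, the heterogeneous sample shows more zeros than a single Poisson with the same mean predicts, with no dual safe/unsafe state involved.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Site-to-site heterogeneity in the crash rate (low exposure, varying sites):
# lambda_i ~ Gamma(shape=0.5, scale=2.0), so the rates have mean 1.
lam = rng.gamma(shape=0.5, scale=2.0, size=n)
y = rng.poisson(lam)

observed_zero_frac = np.mean(y == 0)          # zeros actually seen
poisson_zero_frac = np.exp(-lam.mean())       # zeros a single Poisson at the
                                              # same overall mean would predict
# heterogeneity alone yields "excess" zeros relative to the Poisson fit
print(observed_zero_frac > poisson_zero_frac)  # → True
```

For this gamma mixture the observed zero fraction is close to E[e^{-λ}] = (1 + 2)^{-0.5} ≈ 0.58, versus e^{-1} ≈ 0.37 for the homogeneous Poisson, which is exactly the kind of gap a ZIP/ZINB model would (mis)attribute to a "perfectly safe" state.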