678 results for PSC-CUNY Grants
Abstract:
Background: Previous studies have found that high temperatures increase the risk of mortality in summer. However, little is known about whether a sharp decrease or increase in temperature between neighbouring days has any effect on mortality. Method: Poisson regression models were used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. The temperature change was calculated as the current day's mean temperature minus the previous day's mean. Results: In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.157 (95% confidence interval (CI): 1.024, 1.307) for total non-external mortality (NEM), 1.186 (95% CI: 1.002, 1.405) for NEM in females, and 1.442 (95% CI: 1.099, 1.892) for people aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.353 (95% CI: 1.033, 1.772) for cardiovascular mortality and 1.667 (95% CI: 1.146, 2.425) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.133 (95% CI: 1.053, 1.219) for total NEM, 1.252 (95% CI: 1.131, 1.386) for cardiovascular mortality, and 1.254 (95% CI: 1.135, 1.385) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. Conclusion: A significant change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for the current temperature.
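The statistical design lends itself to a compact sketch. The following is a minimal illustration of the kind of model described, not the authors' actual specification (published models of this type also adjust for season, long-term trend, humidity and other confounders); the file and column names are hypothetical.

```python
# A minimal sketch, assuming a daily time-series with hypothetical columns
# 'date', 'deaths' and 'mean_temp'. Not the authors' published specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_mortality.csv", parse_dates=["date"])  # hypothetical file

# Temperature change: current day's mean temperature minus the previous day's.
df["temp_change"] = df["mean_temp"].diff()

# Indicators for a between-day drop or rise of more than 3 degrees C.
df["drop_gt3"] = (df["temp_change"] < -3).astype(int)
df["rise_gt3"] = (df["temp_change"] > 3).astype(int)

# Poisson regression of daily death counts on temperature change,
# controlling for the current day's mean temperature.
model = smf.glm("deaths ~ drop_gt3 + rise_gt3 + mean_temp",
                data=df.dropna(),
                family=sm.families.Poisson()).fit()

print(np.exp(model.params))  # exponentiated coefficients ~ relative risks
```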
Abstract:
Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, free tagging vocabularies lack standardization and suffer from semantic ambiguity. It is possible to capture the semantics of user tagging in some form of ontology, but applying the resulting ontology to recommendation making has so far seen limited success. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that using the tag ontology to re-rank the recommended tags improves the accuracy of the tag recommendation.
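The re-ranking step admits a short sketch. This is an assumed shape of such a procedure, not the paper's published algorithm; `ontology_sim` is a hypothetical similarity function derived from the learnt tag ontology.

```python
# A minimal sketch of ontology-based re-ranking of recommended tags.
# Assumptions: 'candidates' are (tag, score) pairs from a base recommender,
# 'known_tags' are the resource's existing tags, and 'ontology_sim' maps a
# pair of tags to a similarity in [0, 1] using the learnt tag ontology.
from typing import Callable

def rerank(candidates: list[tuple[str, float]],
           known_tags: list[str],
           ontology_sim: Callable[[str, str], float],
           alpha: float = 0.5) -> list[tuple[str, float]]:
    """Blend the recommender's score with ontology similarity and re-sort."""
    reranked = []
    for tag, score in candidates:
        sim = max((ontology_sim(tag, k) for k in known_tags), default=0.0)
        reranked.append((tag, alpha * score + (1 - alpha) * sim))
    return sorted(reranked, key=lambda t: t[1], reverse=True)
```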
Abstract:
In this chapter we describe a history of collaboration between university-based literacy researchers and school-based teachers in teacher development programs and practitioner inquiries designed to improve literacy outcomes for students living in low-socioeconomic circumstances. We consider how an inquiry stance has informed teachers working for social justice through curriculum and pedagogy designed to connect children’s developing literacy repertoires with their changing material, social and linguistic contexts. We use examples from the practices of two of our long-term teacher-collaborators to show what has been possible to achieve, even in radically different policy contexts, because of teachers’ continued commitment to themes of place and belonging, and language and identity.
The transition to school of children with developmental disabilities: views of parents and teachers
Abstract:
The transition from early intervention programs to inclusive school settings presents children with developmental disabilities with a range of social challenges. In Queensland, in the year of transition to school, many children with developmental disabilities attend an Early Childhood Development Program for 2 to 3 days each week and also begin attendance in a mainstream program, with the latter increasing to full-time attendance during the year. Quantitative and qualitative data were collected through parent interviews and teacher questionnaires for 62 children participating in the Transition to School Project, regarding perceptions of the success of the transition process and the benefits and challenges of inclusion. Both parents and teachers saw a range of benefits to children from their inclusion in 'regular' classrooms, with parents noting the helpfulness of teachers and their support for inclusion. Challenges noted by parents included the school's lack of preparation for their child's particular developmental needs, especially in terms of the physical environment, while teachers reported challenges in meeting the needs of these children within the context and resources of the classroom. Parents were more likely than teachers to view the transition as easy. Correlational analyses indicated that teachers were more likely to view the transition as easy when they felt that the child was appropriately placed in a 'regular' classroom. Findings from this project can inform the development of effective transition-to-school programs in the early school years for children with developmental disabilities.
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and identifying the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment with hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and who are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people in order to calculate operational measures such as the time taken to travel from point to point.
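One plausible reading of the "average" soft biometric admits a compact sketch: measure how far each person's soft-biometric vector lies from the population average, and monitor only the outliers. This is an illustrative interpretation, not the paper's exact formulation; the feature extraction is assumed and the data below are synthetic.

```python
# Sketch: people whose soft-biometric vectors (e.g. clothing colour, height)
# lie far from the population average are distinctive enough to track across
# cameras. Illustrative interpretation only, not the paper's exact method.
import numpy as np

def distinctiveness(features: np.ndarray) -> np.ndarray:
    """features: (n_people, n_dims) soft-biometric vectors.
    Returns each person's normalized distance from the 'average person'."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8          # avoid division by zero
    return np.linalg.norm((features - mean) / std, axis=1)

feats = np.random.rand(1000, 16)               # synthetic population
scores = distinctiveness(feats)
candidates = np.argsort(scores)[::-1][:10]     # ten most distinctive people
```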
Abstract:
Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, which was proposed in the compressed sensing literature. The reconstruction errors are then computed, and abnormal items are detected on that basis. Our approach can be used to detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it is shown to outperform current state-of-the-art approaches, and we also obtain promising results for rapid escape detection using the PETS2009 dataset.
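The detection pipeline can be sketched as follows. scikit-learn provides no Dantzig Selector, so orthogonal matching pursuit stands in for the sparse-coding step here; LBP-TOP feature extraction is assumed to have already produced the vectors, and all data below are synthetic placeholders.

```python
# Sketch of detection by sparse reconstruction error over a learnt
# overcomplete dictionary. OMP substitutes for the paper's Dantzig Selector.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import orthogonal_mp

train = np.random.rand(500, 64)                 # synthetic normal features
test = np.random.rand(50, 64)                   # synthetic new observations

# Learn an overcomplete basis (more atoms than dimensions) from normal data.
dico = DictionaryLearning(n_components=128).fit(train)
D = dico.components_.T                          # (n_dims, n_atoms)

# Sparse-code each test sample and flag large reconstruction errors.
codes = orthogonal_mp(D, test.T, n_nonzero_coefs=10)    # (n_atoms, n_samples)
errors = np.linalg.norm(test.T - D @ codes, axis=0)
anomalies = errors > errors.mean() + 2 * errors.std()   # simple threshold
```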
Abstract:
This paper describes a scene invariant crowd counting algorithm that uses local features to monitor crowd size. Unlike previous algorithms that require each camera to be trained separately, the proposed method uses camera calibration to scale between viewpoints, allowing a system to be trained and tested on different scenes. A pre-trained system could therefore be used as a turn-key solution for crowd counting across a wide range of environments. The use of local features allows the proposed algorithm to calculate local occupancy statistics, and Gaussian process regression is used to scale to conditions which are unseen in the training data, also providing confidence intervals for the crowd size estimate. A new crowd counting database is introduced to the computer vision community to enable a wider evaluation over multiple scenes, and the proposed algorithm is tested on seven datasets to demonstrate scene invariance and high accuracy. To the authors' knowledge this is the first system of its kind due to its ability to scale between different scenes and viewpoints.
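The regression stage can be sketched compactly. The example below assumes local features have already been extracted and calibrated between viewpoints, and shows how Gaussian process regression yields both a count estimate and a confidence interval; feature dimensions and data are placeholders.

```python
# Sketch of the crowd-size regression step only. Feature extraction and
# camera-calibration scaling are assumed to have happened upstream.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X_train = np.random.rand(200, 8)          # synthetic per-frame local features
y_train = np.random.randint(0, 50, 200)   # synthetic ground-truth counts

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_train, y_train)

X_new = np.random.rand(5, 8)
mean, std = gp.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # ~95% interval
```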
Abstract:
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. The spreading activation, spooky-action-at-a-distance and entanglement models have all been used to model the activation of a word. Recently a hypothesis was put forward that the mean activation levels produced by the respective models are ordered as follows: spreading activation ≤ entanglement ≤ spooky-action-at-a-distance. This article investigates this hypothesis by means of a substantial empirical analysis of each model using the University of South Florida word association, rhyme and word fragment norms.
Abstract:
This paper adopts an epistemic community framework to explicate the dual role of epistemic communities as influencers of accounting policy within regulatory space and as implementers who effect change within the domain of accounting. The context is the adoption and implementation of fair value accounting within local government in New South Wales (NSW). The roles and functions of Australian local government are extensive, and include the development and maintenance of infrastructure, provision of recreational facilities, certain health and community services, buildings, cultural facilities and, in some cases, water and sewerage (Australian Local Government Association, 2009). The NSW state Department of Local Government (DLG) is responsible for legislation and policy development to ensure that local councils are able to deliver 'quality services to their communities in a sustainable manner' (DLG, 2008c). These local councils receive revenue from various sources including property rates, government grants and user-pays service provision.

In July 2006 the DLG issued Circular 06-45 to councils (DLG, 2006c), mandating the staged adoption of fair value measurement of infrastructure assets. This directive followed the policy of NSW State Treasury (NSW Treasury, 2007) and an independent inquiry into the financial sustainability of local councils (LGSA, 2006). It was an attempt to resolve the inconsistency in public sector asset valuation in NSW local government, and to provide greater usefulness and comparability of financial statements. The focus of this study is the mobilization of accounting change by the DLG within this wider political context.

When a regulatory problem arises, those with political power seek advice from professionals with relevant skill and expertise (Potter, 2005). This paper explores the way in which professionals diffuse accounting 'problems' and the associated accounting solutions 'across time and space' (Potter, 2005, p. 277). The DLG's fair value accounting policy emanated from a 'regulatory space' (Hancher and Moran, 1989) as a result of negotiations between many parties, including accounting and finance professionals. Operating within the local government sector, these professionals were identified by the DLG as being capable of providing helpful input. They were also responsible for the implementation of the new policy within local councils. Accordingly, they have been identified as an epistemic community with the ability to translate regulatory power by changing the domain of accounting (Potter, 2005, p. 278).

The paper is organised as follows. The background to the DLG's decision to require the introduction of fair value accounting for infrastructure assets is explored. Following this, the method of the study is described, and the epistemic community framework outlined. In the next sections, evidence of the influencing and implementing roles of epistemic groups is provided. Finally, conclusions are drawn about the significance of these groups both within regulatory space in developing accounting regulation, and in embedding change within the domain of accounting.
Abstract:
Quantum dot–plasmon waveguide systems are of interest for the active control of plasmon propagation and, consequently, the development of active nanophotonic devices such as nano-sized optical transistors. This paper is concerned with how varying the aspect ratio of the waveguide cross-section affects the quantum dot–plasmon coupling. We compare a stripe waveguide with an equivalent nanowire, illustrating that both waveguides have a similar coupling strength to a nearby quantum dot for small waveguide cross-sections, thereby indicating that lithographic stripe waveguides have strong potential for use in quantum dot–plasmon waveguide systems. We also demonstrate that changing the aspect ratio of both stripe and wire waveguides can increase the spontaneous emission rate of the quantum dot into the plasmon mode by up to a factor of five. The results of this paper will contribute to the optimisation of quantum dot–plasmon waveguide systems and help pave the way for the development of active nanophotonic devices.
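For orientation, the standard figures of merit in this literature can be written as follows; these are textbook definitions, assumed rather than taken from the paper itself.

```latex
% Standard figures of merit for quantum dot--plasmon waveguide coupling
% (textbook definitions, not necessarily the exact quantities in the paper):
% F_pl: enhancement of emission into the plasmon relative to free space;
% beta: fraction of the dot's total decay captured by the guided plasmon.
\[
F_{\mathrm{pl}} = \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{0}},
\qquad
\beta = \frac{\Gamma_{\mathrm{pl}}}{\Gamma_{\mathrm{pl}} + \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}}
\]
% Gamma_pl: emission rate into the plasmon mode; Gamma_rad, Gamma_nr:
% radiative and non-radiative loss rates; Gamma_0: free-space emission rate.
```

On these definitions, the reported factor-of-five increase in spontaneous emission into the plasmon mode corresponds to an enhancement of Γ_pl.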
Abstract:
This paper proposes the use of eigenvoice modeling techniques with the Cross Likelihood Ratio (CLR) as a criterion for speaker clustering within a speaker diarization system. The CLR has previously been shown to be a robust decision criterion for speaker clustering using Gaussian Mixture Models. Recently, eigenvoice modeling techniques have become increasingly popular, due to their ability to adequately represent a speaker based on sparse training data, as well as their improved capture of differences in speaker characteristics. This paper hence proposes that it is beneficial to capitalize on the advantages of eigenvoice modeling in a CLR framework. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, with a 35.1% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
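The CLR criterion itself is compact enough to sketch. Below it is written with conventional GMMs for clarity (the paper's contribution is to use eigenvoice-adapted speaker models inside the same criterion); model sizes are placeholders.

```python
# Sketch of the cross likelihood ratio between two speaker clusters, using
# plain GMMs as stand-ins for the paper's eigenvoice-adapted models.
# GaussianMixture.score(X) returns the mean per-frame log-likelihood, which
# gives the (1/N) log-likelihood terms of the CLR directly.
from sklearn.mixture import GaussianMixture

def clr(X_i, X_j, ubm_data, n_components=8):
    """X_i, X_j: (n_frames, n_dims) feature arrays for two clusters."""
    ubm = GaussianMixture(n_components).fit(ubm_data)   # background model
    gmm_i = GaussianMixture(n_components).fit(X_i)
    gmm_j = GaussianMixture(n_components).fit(X_j)
    # CLR(i,j) = (1/N_i) log p(X_i|m_j)/p(X_i|UBM)
    #          + (1/N_j) log p(X_j|m_i)/p(X_j|UBM)
    return ((gmm_j.score(X_i) - ubm.score(X_i)) +
            (gmm_i.score(X_j) - ubm.score(X_j)))

# Clusters whose pairwise CLR exceeds a threshold are candidates for merging.
```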
Abstract:
Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips and hard assignment of visual words lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
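The soft histogram idea admits a brief sketch: rather than adding a hard count of one to the single nearest visual word, each feature spreads its unit of mass over its k nearest codewords. The kernel and parameters below are illustrative assumptions, not the paper's published choices.

```python
# Sketch of soft assignment of features to visual words: ambiguous features
# are no longer forced into a single histogram bin.
import numpy as np

def soft_histogram(features, codebook, k=3, sigma=1.0):
    """features: (n, d) descriptors; codebook: (w, d) visual words."""
    hist = np.zeros(len(codebook))
    for f in features:
        d = np.linalg.norm(codebook - f, axis=1)
        nearest = np.argsort(d)[:k]
        w = np.exp(-d[nearest] ** 2 / (2 * sigma ** 2))  # Gaussian kernel weights
        hist[nearest] += w / w.sum()                     # each feature adds mass 1
    return hist
```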
Abstract:
Modelling events in densely crowded environments remains challenging, due to the diversity of events and the noise in the scene. We propose a novel approach for anomalous event detection in crowded scenes using dynamic textures described by the Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) descriptor. The scene is divided into spatio-temporal patches from which LBP-TOP based dynamic textures are extracted. We apply hierarchical Bayesian models to detect the patches containing unusual events. Our method is an unsupervised approach that does not rely on object tracking or background subtraction. We show that our approach outperforms existing state-of-the-art algorithms for anomalous event detection on the UCSD dataset.
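The descriptor itself is straightforward to sketch. The simplified version below computes local binary patterns on the three orthogonal centre planes of a spatio-temporal patch using scikit-image; a full LBP-TOP implementation aggregates over all planes of the volume, and the hierarchical Bayesian modelling stage is not shown.

```python
# Simplified LBP-TOP features for one spatio-temporal patch: LBP histograms
# from the XY, XT and YT planes through the centre of the volume.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(patch, P=8, R=1):
    """patch: (t, y, x) grey-level video volume; returns concatenated histograms."""
    t, y, x = patch.shape
    planes = [patch[t // 2], patch[:, y // 2, :], patch[:, :, x // 2]]  # XY, XT, YT
    hists = []
    for plane in planes:
        lbp = local_binary_pattern(plane, P, R, method="uniform")
        h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)
```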
Abstract:
Twitter has become a major instrument for the rapid dissemination and subsequent debate of news stories. It has been instrumental both in drawing attention to events as they unfolded (such as the emergency landing of a plane in New York’s Hudson River in 2009) and in facilitating a sustained discussion of major stories over timeframes measured in weeks and months (including the continuing saga around Wikileaks and Julian Assange), sometimes still keeping stories alive even if mainstream media attention has moved on elsewhere. More comprehensive methodologies for research into news discussion on Twitter – beyond anecdotal or case study approaches – are only now beginning to emerge. This paper presents a large-scale quantitative approach to studying public communication in the Australian Twittersphere, developed as part of a three-year ARC Discovery project that also examines blogs and other social media spaces. The paper will both outline the innovative research tools developed for this work, and present outcomes from an application of these methodologies to recent and present news themes. Our methodology enables us to identify major themes in Twitter’s discussion of these events, trace their development and decline over time, and map the dynamics of the discussion networks formed ad hoc around specific themes (in part with the help of Twitter #hashtags: brief identifiers which mark a tweet as taking part in an established discussion). It is also able to identify links to major news stories and other online resources, and to track their dissemination across the wider Twittersphere.
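The most basic operation this methodology implies is easy to sketch: counting tweets per #hashtag per day to trace a theme's rise and decline. The input format below is a hypothetical flat export, not the project's actual tooling.

```python
# Sketch: daily tweet volume per hashtag, for tracing theme development and
# decline over time. Assumes a hypothetical CSV with 'created_at' and a
# space-separated 'hashtags' column such as "#wikileaks #auspol".
import pandas as pd

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])  # hypothetical dump

# One row per (tweet, hashtag) pair.
tags = (tweets.assign(hashtag=tweets["hashtags"].str.split())
              .explode("hashtag"))

daily = (tags.groupby([pd.Grouper(key="created_at", freq="D"), "hashtag"])
             .size()
             .unstack(fill_value=0))
print(daily["#wikileaks"])   # daily volume for one theme over time
```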
Abstract:
Objectives: To quantify randomness and cost when choosing health and medical research projects for funding. Design: Analysis of retrospective data from grant review panels. Setting: The National Health & Medical Research Council of Australia. Participants/Data: All panel members' scores for grant proposals submitted in 2009. Main outcome measure: The proportion of grant proposals that were always, sometimes and never funded after accounting for random variability arising from variation in panel members' scores; the cost-effectiveness of different-sized assessment panels. Results: 59% of 620 funded grants were sometimes not funded when random variability was accounted for. Only 9% of grant proposals were always funded, 61% were never funded and 29% were sometimes funded. The extra cost per grant effectively funded from the most effective system was $18,541. Conclusions: Allocating funding for scientific research in health and medicine is costly and somewhat random. There are many useful research questions to be addressed that could improve current processes.
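The kind of calculation behind these figures can be sketched as a simple Monte Carlo simulation: resample each proposal's panel scores, re-rank, and count how often each proposal clears the funding line. All numbers below are synthetic placeholders, not NHMRC data.

```python
# Sketch of quantifying funding randomness by bootstrapping panel scores.
import numpy as np

rng = np.random.default_rng(0)
n_props, n_panel, n_funded, n_sims = 100, 7, 20, 10_000

scores = rng.normal(5, 1, (n_props, n_panel))   # synthetic panel scores
funded = np.zeros(n_props)

for _ in range(n_sims):
    # Resample each proposal's panel scores with replacement.
    cols = rng.integers(0, n_panel, (n_props, n_panel))
    resampled = scores[np.arange(n_props)[:, None], cols]
    # Fund the top-ranked proposals by resampled mean score.
    ranks = resampled.mean(axis=1).argsort()[::-1]
    funded[ranks[:n_funded]] += 1

p = funded / n_sims
print(f"always funded: {(p == 1).mean():.0%}, "
      f"never funded: {(p == 0).mean():.0%}, "
      f"sometimes funded: {((p > 0) & (p < 1)).mean():.0%}")
```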