902 results for CUNY-wide IT steering committee
A story worth telling : putting oral history and digital collections online in cultural institutions
Abstract:
Digital platforms in cultural institutions offer exciting opportunities for oral history and digital storytelling that can augment and enrich traditional collections. The way in which cultural institutions allow access to the public is changing dramatically, prompting substantial expansions of their oral history and digital story holdings. In Queensland, Australia, public libraries and museums are becoming innovative hubs for a wide assortment of collections that represent a cross-section of community groups and organisations through the integration of oral history and digital storytelling. The State Library of Queensland (SLQ) features digital stories online to encourage users to explore what the institution holds in its catalogue through its website. SLQ now also offers oral history interviews online, to introduce both current and new users to oral history and to other components of its collections, such as photographs and documents. This includes the various departments, Indigenous centres and regional libraries affiliated with SLQ statewide, which are often unable to access the materials held within, or even full information about, the collections available within the institution. There has been a growing demand for resources and services that help to satisfy community enthusiasm and promote engagement. Demand increases as public access to affordable digital media technologies increases, and as community or marginalised groups become interested in do-it-yourself (DIY) history; SLQ encourages this. This paper draws on the oral history and digital story-based research undertaken by the Queensland University of Technology (QUT) for the State Library of Queensland, including: the Apology Collection: The Prime Minister’s apology to Australia’s Indigenous Stolen Generation; Five Senses: regional Queensland artists; Gay history of Brisbane; and The Queensland Business Leaders Hall of Fame.
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and, ultimately, production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport, et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the questions of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog'), rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to be able to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to model collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
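As a purely illustrative aside (not the paper's actual specification), the sketch below shows why common unobserved site factors induce correlation between crash-type counts, which is the motivation for estimating the collision-type equations simultaneously rather than independently; all coefficients, traffic volumes and variable names are invented for the example.

```python
import numpy as np

# Illustrative sketch: correlated crash-type counts at intersections arising from a
# shared unobserved site effect.  All coefficients and covariates are invented.
rng = np.random.default_rng(0)
n_sites = 400

aadt_major = rng.uniform(2_000, 15_000, n_sites)   # hypothetical major-road traffic volumes
aadt_minor = rng.uniform(200, 3_000, n_sites)      # hypothetical minor-road traffic volumes
site_effect = rng.normal(0.0, 0.5, n_sites)        # common unobserved heterogeneity

def expected_crashes(beta0, b_major, b_minor):
    """Log-linear safety performance function that shares the unobserved site effect."""
    return np.exp(beta0 + b_major * np.log(aadt_major)
                  + b_minor * np.log(aadt_minor) + site_effect)

angle_crashes = rng.poisson(expected_crashes(-9.0, 0.8, 0.5))
rear_end_crashes = rng.poisson(expected_crashes(-10.0, 1.0, 0.3))

# The shared site effect induces correlation across crash types, which a simultaneous
# (multivariate) count model exploits instead of ignoring.
print("correlation between angle and rear-end counts:",
      round(float(np.corrcoef(angle_crashes, rear_end_crashes)[0, 1]), 2))
```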
Abstract:
System analysis within the traction power system is vital to the design and operation of an electrified railway. Loads in traction power systems are often characterised by their mobility, wide range of power variations, regeneration and service dependence. In addition, the feeding systems may take different forms in AC electrified railways. Comprehensive system studies are usually carried out by computer simulation. A number of traction power simulators are available; they allow the calculation of electrical interaction among trains and deterministic solutions of the power network. In this paper, a different approach is presented to enable load-flow analysis for various feeding systems and service demands in AC railways by adopting probabilistic techniques. It is intended to provide a different viewpoint on the load conditions. Simulation results are given to verify the probabilistic load-flow models.
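The abstract does not give the formulation, but a minimal Monte Carlo sketch of a probabilistic load flow for a single-end-fed 25 kV AC feeder is shown below; the feeder impedance, train power draws and the constant-current load approximation are all assumptions made for illustration, not values from the paper.

```python
import numpy as np

# Monte Carlo sketch of a probabilistic load flow for a single-end-fed 25 kV AC feeder
# section.  Trains are approximated as constant-current loads at unity power factor.
rng = np.random.default_rng(0)

V_SOURCE = 25_000.0            # V, substation (feeder) voltage
FEEDER_LEN_KM = 30.0
Z_PER_KM = 0.17 + 0.43j        # ohm/km, assumed catenary + return-path impedance

def one_snapshot():
    """Draw one random service snapshot and return the worst pantograph voltage."""
    n_trains = rng.integers(1, 6)                        # random service demand
    positions = np.sort(rng.uniform(0, FEEDER_LEN_KM, n_trains))
    powers = rng.uniform(1e6, 6e6, n_trains)             # W, random traction demand
    currents = powers / V_SOURCE                         # constant-current approximation

    # Voltage at each train = source voltage minus drops over upstream segments;
    # each segment carries the current of every train fed through it.
    v = np.empty(n_trains, dtype=complex)
    volts = V_SOURCE + 0j
    prev_pos = 0.0
    for i, pos in enumerate(positions):
        downstream_current = currents[i:].sum()
        volts -= Z_PER_KM * (pos - prev_pos) * downstream_current
        v[i] = volts
        prev_pos = pos
    return np.abs(v).min()

min_voltages = np.array([one_snapshot() for _ in range(10_000)])
print(f"mean worst-case voltage : {min_voltages.mean():.0f} V")
print(f"5th percentile voltage  : {np.percentile(min_voltages, 5):.0f} V")
```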
Abstract:
Eating is an essential everyday life activity that has fascinated, captivated and defined society since time began. We currently exist in a society where over-consumption of food is an established risk factor for chronic disease, the rate of which is increasing alarmingly. 'Food literacy' is an emerging term used to describe what we, as individuals and as a community, know and understand about food and how to use it to meet our needs, and thus potentially support and empower citizens to make healthy food choices. What exactly the components of food literacy are, and how they influence food choice, are poorly defined and understood, but these questions are increasingly gaining interest among health professionals, policy makers, community workers, educators and members of the public. This paper will build the argument for why concepts of 'food literacy' need to extend beyond existing terms and measures used in the literature to describe the food skills and knowledge needed to make use of public health nutrition messages.
Abstract:
Background: The hedgehog signaling pathway is vital in early development but then becomes dormant, except in some cancers. Hedgehog inhibitors are being developed for potential use in cancer. Objectives/Methods: The objective of this evaluation is to review the initial clinical studies of the hedgehog inhibitor GDC-0449 in subjects with cancer. Results: Phase I trials have shown that GDC-0449 has benefits in subjects with metastatic or locally advanced basal-cell carcinoma and in one subject with medulloblastoma. GDC-0449 was well tolerated. Conclusions: Long-term efficacy and safety studies of GDC-0449 in these conditions and other solid cancers are now underway. These clinical trials with GDC-0449, and trials with other hedgehog inhibitors, will reveal whether or not it is beneficial and safe to inhibit the hedgehog pathway in a wide range of solid tumours.
Abstract:
Continuous biometric authentication schemes (CBAS) are built around biometrics supplied by users' behavioural characteristics and continuously check the identity of the user throughout the session. The current literature on CBAS primarily focuses on the accuracy of the system in order to reduce false alarms. However, these attempts do not consider various issues that might affect practicality in real-world applications and continuous authentication scenarios. One of the main issues is that the presented CBAS are based on several samples of training data, either from both intruders and valid users or from the valid users' profiles only. This means that historical profiles for either the legitimate users or possible attackers must be available or collected before prediction time. However, in some cases it is impractical to obtain the biometric data of the user in advance (before detection time). Another issue is the variability of the user's behaviour between the profile registered during enrolment and the profile obtained during the testing phase. The aim of this paper is to identify the limitations of current CBAS in order to make them more practical for real-world applications. The paper also discusses a new application for CBAS that does not require any training data from either intruders or valid users.
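As a hedged illustration of the kind of CBAS that needs no pre-collected training data, the sketch below builds the behavioural profile online during the session and flags later observations that deviate strongly from it; the class name, the use of keystroke latencies and all thresholds are hypothetical, not taken from the paper.

```python
import numpy as np

class OnlineContinuousAuthenticator:
    """Hypothetical sketch of a CBAS needing no pre-collected training data.

    A per-session behavioural profile (e.g. keystroke inter-key latencies in ms) is
    built incrementally with Welford's algorithm; later observations that deviate
    strongly from the running profile raise an alarm.
    """

    def __init__(self, warmup=50, z_threshold=3.5):
        self.warmup = warmup          # observations used purely to build the profile
        self.z_threshold = z_threshold
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                 # running sum of squared deviations (Welford)

    def update(self, x):
        """Feed one behavioural measurement; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = np.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        # fold the observation into the running profile (a deliberate simplification)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    auth = OnlineContinuousAuthenticator()
    # 200 latencies from the legitimate user, then 20 from a very different "intruder"
    stream = np.concatenate([rng.normal(120, 15, 200), rng.normal(320, 15, 20)])
    alarms = [i for i, x in enumerate(stream) if auth.update(x)]
    print("alarm indices:", alarms)
```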
Abstract:
Over the years, public health in relation to Australian Aboriginal people has involved many individuals and groups, including health professionals, governments, politicians, special interest groups and corporate organisations. From the commencement of colonisation until the 1980s, public health relating to Aboriginal and Torres Strait Islander people was not necessarily in the best interests of Aboriginal and Torres Strait Islander people, but rather in the interests of the non-Aboriginal population. Such attention as was paid focussed more generally on the subject of reproduction and on issues of prostitution, exploitation, abuse and venereal disease (Kidd, 1997). Since the late 1980s there has been a shift in the broader public health agenda (see Baum, 1998), along with public health in relation to Aboriginal and Torres Strait Islander people (NHMRC, 2003). This has been coupled with increasing calls to develop appropriate tertiary curriculum and to educate, train and employ more Aboriginal and Torres Strait Islander and non-Aboriginal people in public health (Anderson et al., 2004; Genat, 2007; PHERP, 2008a, 2008b). Aboriginal and Torres Strait Islander people have become engaged in public health in ways that position them to influence the public health agenda (Anderson 2004; 2008; Anderson et al., 2004; NATSIHC, 2003). There have been numerous projects, programs and strategies that have sought to develop the Aboriginal and Torres Strait Islander public health workforce (AHMAC, 2002; Oldenburg et al., 2005; SCATSIH, 2002). In recent times the Aboriginal community controlled health sector has joined forces with other peak bodies and governments to find solutions and strategies to improve the health outcomes of Aboriginal and Torres Strait Islander peoples (NACCHO & Oxfam, 2007). This case study chapter will not address these broader activities. Instead it will explore the activities and roles of staff within the Public Health and Research Unit (PHRU) at the Victorian Aboriginal Community Controlled Health Organisation (VACCHO). It will focus on their experiences with education institutions, their work in public health, and their thoughts on gaps and where improvements can be made in public health, research and education. What will be demonstrated is the diversity of education qualifications and experience. What will also be reflected is how people work within public health on a daily basis to enact change for equity in health and to contribute to the improvement of future health outcomes of the Victorian Aboriginal community.
Abstract:
The Rudd Labor Government rode to power in Australia on the promise of an 'education revolution'. The term 'education revolution' carries all the obligatory marketing metaphors that an aspirant government might want recognised by the general public on the eve of coming to power; however, in revolutionary terms it fades into insignificance in comparison to the real revolution in Australian education. This revolution, simply put, is the elevation of Indigenous Knowledge Systems in Australian universities. In the forty-three years since the nation-defining Referendum of 1967, a generation has made a beachhead on the educational landscape. Now a further generation, having made it into the field of higher degrees, yearns for the ways and means to authentically marshal Indigenous knowledge. The Institute of Koorie Education at Deakin has for over twenty years not only witnessed this transition but has also led the field. With the appointment of two Chairs of Indigenous Knowledge Systems to build on its already established research profile, the Institute has moved towards what is the 'real revolution' in education: the elevation of Indigenous Knowledge as a legitimate knowledge system. This paper lays out the Institute of Koorie Education's Research Plan and the basis of an argument, put to the academy, that will be the driver for this pursuit.
Abstract:
Before 2001, most Africans immigrating to Australia were white South Africans and Zimbabweans who arrived as economic and family-reunion migrants (Cox, Cooper & Adepoju, 1999). Black African communities are a more recent addition to the Australian landscape, with most entering Australia as refugees after 2001. African refugees are a particularly disadvantaged immigrant group, which the Department of Immigration and Multicultural Affairs (cited in Community Relations Commission of New South Wales, 2006, p. 23) suggests requires high levels of settlement support. Decision makers and settlement service providers need settlement data on the communities so that they can be effective in planning, budgeting and delivering support where it is most needed. Settlement data are also useful for determining the challenges that these communities face in trying to establish themselves in resettlement. However, existing secondary data sources have not been verified, and there has been no previous formal study of African refugee settlement geography in Southeast Queensland. This research addresses the knowledge gap by using a mixed-method approach to identify and describe the distribution and population size of eight African communities in Southeast Queensland, examine secondary migration patterns in these communities, and assess the relationship between these geographic features and housing, a critical factor in successful settlement. Significant discrepancies exist between the primary data gathered in the study and existing secondary data relating to population size and distribution of the communities. Results also reveal a tension between the socio-cultural forces and the housing and economic imperatives driving secondary migration in the communities, and a general lack of engagement by African refugees with structured support networks. These findings have a wide range of implications for policy and for groups that provide settlement support to these communities.
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation has proven to be the cheapest means of carrying out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common; most applications focused on isolated parts of the railway system, and it is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably involve many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design; maintainability and modularity, for easy understanding and further development, and portability across hardware platforms are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is given, in particular, to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various ‘what-if’ questions effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
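To make the train-movement component concrete, here is a minimal point-mass sketch of the kind of 'what-if' calculation such simulators perform (run time and speed for one section); the mass, tractive effort, resistance coefficients and speed limit are assumed values, not drawn from the paper.

```python
import numpy as np

def simulate_train_run(distance_m=5000.0, mass_kg=4.0e5, max_tractive_N=3.0e5,
                       max_brake_N=4.0e5, power_W=4.0e6, speed_limit_mps=25.0,
                       dt=0.5):
    """Minimal point-mass train movement sketch (illustrative values only).

    The train accelerates under power-limited tractive effort, runs against
    Davis-type resistance, obeys a single speed limit, and brakes to stop at
    the end of the section.  Returns time, position and speed histories.
    """
    # Davis resistance coefficients (N, N/(m/s), N/(m/s)^2) -- assumed values
    a, b, c = 3000.0, 100.0, 8.0

    t, x, v = 0.0, 0.0, 0.0
    ts, xs, vs = [t], [x], [v]
    while x < distance_m:
        # braking distance needed to stop from the current speed
        brake_decel = max_brake_N / mass_kg
        stop_dist = v**2 / (2.0 * brake_decel)

        if distance_m - x <= stop_dist:
            force = -max_brake_N                                 # brake to stop at target
        elif v < speed_limit_mps:
            force = min(max_tractive_N, power_W / max(v, 0.1))   # power-limited traction
        else:
            force = 0.0                                          # coast at the speed limit

        resistance = a + b * v + c * v**2
        accel = (force - resistance) / mass_kg
        v = max(0.0, min(v + accel * dt, speed_limit_mps))
        x += v * dt
        t += dt
        ts.append(t)
        xs.append(x)
        vs.append(v)
        if t > 3600:                                             # safety cut-off
            break
    return np.array(ts), np.array(xs), np.array(vs)


if __name__ == "__main__":
    t, x, v = simulate_train_run()
    print(f"run time: {t[-1]:.0f} s, peak speed: {v.max():.1f} m/s")
```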
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods, for different particle sizes, conducted in different parts of the world. The choice of the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the robustness of the statistical model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and of the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which may lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors for their region. Recommendations and perspectives In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
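As an illustration of the kind of statistical screening described (not the study's actual dataset, variables or model), the sketch below fits a linear model of log emission factor on categorical explanatory variables such as vehicle type and road type, using synthetic records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical compiled emission-factor records (log10 of particle-number EF per km).
rng = np.random.default_rng(1)
n = 120
vehicle = rng.choice(["passenger_car", "heavy_duty_vehicle", "bus"], size=n)
road = rng.choice(["freeway", "arterial", "residential"], size=n)
base = {"passenger_car": 13.5, "heavy_duty_vehicle": 15.0, "bus": 14.6}  # assumed means
log_ef = np.array([base[v] for v in vehicle]) + rng.normal(0, 0.4, size=n)
df = pd.DataFrame({"log_ef": log_ef, "vehicle_type": vehicle, "road_type": road})

# Linear model of log emission factor on the categorical explanatory variables,
# mirroring the style of analysis sketched in the abstract.
model = smf.ols("log_ef ~ C(vehicle_type) + C(road_type)", data=df).fit()
print(model.summary().tables[1])
print(f"variance explained (R^2): {model.rsquared:.2f}")
```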
Abstract:
Visualisation provides a method to efficiently convey and understand the complex nature and processes of groundwater systems. This technique has been applied to the Lockyer Valley to aid in comprehending the current condition of the system. The Lockyer Valley in southeast Queensland hosts intensive irrigated agriculture sourcing groundwater from alluvial aquifers. The valley is around 3000 km² in area, and the alluvial deposits are typically 1-3 km wide and 20-35 m deep in the main channels, reducing in size in the subcatchments. The alluvium is configured as a series of elongate “fingers”. In this roughly circular valley, recharge to the alluvial aquifers is largely from seasonal storm events on the surrounding ranges. The ranges are overlain by basaltic aquifers of Tertiary age, which overall are quite transmissive. Both runoff from these ranges and infiltration into the basalts provide ephemeral flow to the streams of the valley. Throughout the valley there are over 5,000 bores extracting alluvial groundwater, plus lesser numbers extracting from the underlying sandstone bedrock. Although there are approximately 2,500 monitoring bores, the only regularly monitored area is the formally declared management zone in the lower one-third of the valley. This zone has a calibrated Modflow model (Durick and Bleakly, 2000); a broader valley-wide Modflow model was developed in 2002 (KBR), but did not have extensive extraction data for detailed calibration. Another Modflow model focused on a river confluence in a central area (Wilson, 2005), with some local production data and pumping test results. A recent subcatchment simulation model incorporates a network of bores with short-period automated hydrographic measurements (Dvoracek and Cox, 2008). The above simulation models were all based on conceptual hydrogeological models of differing scale and detail.
Abstract:
Establishing the core principles of “entrepreneurial management” within an organization describes a certain strategic choice that affects a company in six dimensions, according to Stevenson (1983). Our aim is to empirically measure entrepreneurial management (its existence and degree) and to link this measured strategic choice (for or against entrepreneurial management) with firm performance. Our argument is that companies that follow the core principles of entrepreneurial management should outperform other, more administrative firms on certain measures of strategic performance. This paper builds on an empirical investigation published by Brown, Davidsson & Wiklund (2001), who developed and tested a reliable measurement instrument for Stevenson’s definition of “entrepreneurial management” (Stevenson 1983; Stevenson & Jarillo 1990). In the first part of our paper we aim to replicate and to some extent improve this study. In the second part we link the measured degree of “entrepreneurial management” with firm performance. To our knowledge, even though Stevenson’s definition of entrepreneurial management is commonly acknowledged and Brown et al. (2001) developed a reliable instrument to empirically capture this behavioral approach to management, the construct of entrepreneurial management has never before been linked to firm performance in an empirical study. Since most papers on corporate entrepreneurship and firm performance are based on Covin & Slevin’s (1991) or Miller’s (1983) concept of entrepreneurial orientation, we contribute to the literature on corporate entrepreneurship in a novel way, given that the entrepreneurial management dimensions measured in our study can be clearly distinguished, theoretically and empirically, from the construct of entrepreneurial orientation as defined by Covin & Slevin (1991).
Abstract:
The widespread use of business planning, in combination with the mixed theoretical and empirical support for its effect, suggests that research is needed that takes a deeper look into the quality of plans and how they are used. In this study we longitudinally examine use vs. non-use, degree of formalization, and revision of plans, as well as the moderation of planning effects by product novelty, among nascent firms. We relate these to the attainment of profitability after 12 months. We find that business planning is negatively related to profitability, but that revising plans is positively related to profitability. Both of these effects are stronger under conditions of high product novelty.
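A moderation analysis of this kind is commonly specified with interaction terms; the sketch below is a synthetic illustration (not the study's data or exact estimator) of a logistic regression in which the effects of planning and plan revision on profitability attainment are allowed to vary with product novelty.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only -- not the study's data.
rng = np.random.default_rng(2)
n = 500
planned = rng.integers(0, 2, n)            # 1 = the nascent firm wrote a business plan
revised = rng.integers(0, 2, n) * planned  # revision is only possible if a plan exists
novelty = rng.integers(0, 2, n)            # 1 = high product novelty

# Assumed effect pattern echoing the abstract: planning hurts, revising helps,
# and both effects are amplified under high product novelty.
logit = (-0.2 - 0.5 * planned + 0.7 * revised
         - 0.4 * planned * novelty + 0.6 * revised * novelty)
profitable = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # profitability after 12 months

df = pd.DataFrame({"profitable": profitable, "planned": planned,
                   "revised": revised, "novelty": novelty})

# Moderated logistic regression: the interaction terms carry the moderation effect.
model = smf.logit("profitable ~ planned * novelty + revised * novelty", data=df).fit(disp=0)
print(model.summary().tables[1])
```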