Abstract:
Extensive data used to quantify broad soil C changes (without information about causation), coupled with intensive data used to attribute changes to specific management practices, could form the basis of an efficient national grassland soil C monitoring network. Based on the variability of extensive (USDA/NRCS pedon database) and intensive field-level soil C data, we evaluated the efficacy of future sample collection to detect changes in soil C in grasslands. Potential soil C changes at a range of spatial scales related to changes in grassland management can be verified (alpha = 0.1) after 5 years with collection of 34, 224, and 501 samples at the county, state, and national scales, respectively. Farm-level analysis indicates that equivalent numbers of cores and distinct groups of cores (microplots) result in the lowest soil C coefficients of variation across a variety of ecosystems. Our results suggest that grassland soil C changes can be precisely quantified using current technology at scales ranging from farms to the entire nation.
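Sample-size figures of this kind follow from a standard power calculation: given the variability of soil C measurements and the minimum change to detect, the required number of samples is determined by the normal quantiles for the chosen alpha and power. A minimal sketch, with placeholder values for the standard deviation and detectable change (the study's actual variability estimates are not reproduced here):

```python
import math
from scipy.stats import norm

def samples_needed(sigma, delta, alpha=0.1, power=0.8):
    """Approximate n for a two-sided test to detect a mean soil C
    change of `delta` given sampling standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical values: sd of 4 Mg C/ha, detectable change of 1.5 Mg C/ha
print(samples_needed(sigma=4.0, delta=1.5, alpha=0.1))
```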
Abstract:
This doctoral thesis comprises three distinct yet related projects which investigate interdisciplinary practice across music collaboration, mime performance, and corporate communication. Both the processes and the underpinning research of these projects explore, expose and exploit areas where disparate and apparently conflicting fields of professional practice successfully and effectively intersect, interact, and inform each other, rather than conflict, thereby enhancing each, both individually and collectively. Informed by three decades of professional practice across music, stage performance, television, corporate communication, design, and tertiary education, the three projects have produced innovative, creative, and commercially viable outcomes, manifest in a variety of media including music, written text, digital audio/visual, and the internet. In exploring new practice and creating new knowledge, these project outcomes clearly demonstrate the value and effectiveness of reconciling disparate fields of practice through the application of interdisciplinary creativity and innovation to professional practice.
Abstract:
This paper uses a multivariate analysis to examine how countries' tax morale and institutional quality affect the shadow economy. The literature strongly emphasizes the quantitative importance of these factors in understanding the level of and changes in the shadow economy. Newly available data sources offer a unique opportunity to further illuminate a topic that has received increased attention. After controlling for a variety of potential factors, we find strong support that higher tax morale and higher institutional quality lead to a smaller shadow economy.
Abstract:
Background / context: The ALTC WIL Scoping Study identified a need to develop innovative assessment methods for work integrated learning (WIL) that encourage reflection and integration of theory and practice within the constraints that result from the level of engagement of workplace supervisors and the ability of academic supervisors to become involved in the workplace.
Aims: The aim of this paper is to examine how poster presentations can be used to authentically assess student learning during WIL.
Method / Approach: The paper uses a case study approach to evaluate the use of poster presentations for assessment in two internship units at the Queensland University of Technology. The first is a unit in the Faculty of Business where students majoring in advertising, marketing and public relations are placed in a variety of organisations. The second is a law unit where students complete placements in government legal offices.
Results / Discussion: While poster presentations are commonly used for assessment in the sciences, they are an innovative approach to assessment in the humanities. This paper argues that posters are one way that universities can overcome the substantial challenges of assessing work integrated learning. The two units involved in the case study adopt different approaches to the poster assessment: the Business unit is non-graded, and its poster assessment task requires students to reflect on their learning during the internship; the Law unit is graded and requires students to present on a research topic that relates to their internship. In both units the posters were presented during a poster showcase attended by students, workplace supervisors and members of faculty. The paper evaluates the benefits of poster presentations for students, workplace supervisors and faculty, and proposes some criteria for poster assessment in WIL.
Conclusions / Implications: The paper concludes that posters can effectively and authentically assess various learning outcomes in WIL across disciplines while offering a means to engage workplace supervisors with academic staff and with the other students and supervisors participating in the unit. Posters can demonstrate reflection in learning and are an excellent vehicle for experiential learning and authentic assessment.
Keywords: work integrated learning, assessment, poster presentations, industry engagement.
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e. small objects that move, such as 'dog' or 'ball'. This view leaves open the questions of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
The QUT-NOISE-TIMIT corpus consists of 600 hours of noisy speech sequences designed to enable a thorough evaluation of voice activity detection (VAD) algorithms across a wide variety of common background noise scenarios. To construct the final mixed-speech database, over 10 hours of background noise was first collected across 10 unique locations covering 5 common noise scenarios, creating the QUT-NOISE corpus. This background noise corpus was then mixed with speech events chosen from the TIMIT clean speech corpus over a wide variety of noise lengths, signal-to-noise ratios (SNRs) and active speech proportions to form the mixed-speech QUT-NOISE-TIMIT corpus. An evaluation of five baseline VAD systems on the QUT-NOISE-TIMIT corpus is conducted to validate the data and to show that the variety of noise available allows for better evaluation of VAD systems than existing approaches in the literature.
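The core operation in building such a corpus is scaling a noise recording so that, when added to clean speech, the mixture hits a target SNR. A minimal sketch of that mixing step (the function name and the random stand-in signals are illustrative, not the corpus tooling):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Both are 1-D float arrays at the same
    sample rate; `noise` must be at least as long as `speech`."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power satisfies p_speech / p_noise_target = 10^(snr/10)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Hypothetical usage with random signals standing in for real recordings
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
background = rng.standard_normal(32000)
noisy = mix_at_snr(clean, background, snr_db=5)
```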
Abstract:
Crash prediction models are used for a variety of purposes, including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared to other segments in the transportation system. For left-turn lanes at intersections in particular, the literature reports mixed results: some researchers have found that left-turn lanes are beneficial to safety, while others have reported detrimental effects. This inconsistency is not surprising given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that without accounting for endogeneity, left-turn lanes 'appear' to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies, as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
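The endogeneity correction works by replacing the endogenous regressor with a prediction built from instruments that are correlated with left-turn-lane installation but not with the crash-equation error. The paper's estimator is LIML; the sketch below shows the closely related two-stage least squares version of the same limited-information idea on a linearized model, with all variable names hypothetical:

```python
import numpy as np

def two_stage_ls(y, X_exog, x_endog, Z):
    """Two-stage least squares sketch.
    Stage 1: regress the endogenous regressor (e.g. left-turn lane
    presence) on exogenous covariates and instruments Z.
    Stage 2: refit the outcome equation using the stage-1 fitted
    values in place of the endogenous regressor."""
    W = np.column_stack([X_exog, Z])
    gamma, *_ = np.linalg.lstsq(W, x_endog, rcond=None)
    x_hat = W @ gamma
    X2 = np.column_stack([X_exog, x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta  # last element: corrected effect of the endogenous regressor
```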
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to model collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
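The "common unobserved factors" argument can be made concrete with a toy simulation: if two collision types at the same site share a latent site effect, their counts are correlated even after conditioning on observed covariates, which is exactly what a simultaneous (multivariate) count model is designed to absorb. All parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites = 1000

# Shared unobserved site effect (e.g. poor sight distance), invented scale
u = rng.normal(0.0, 0.5, n_sites)

# Two collision types at the same site draw on the same latent effect
angle = rng.poisson(np.exp(0.2 + u))
rear_end = rng.poisson(np.exp(0.5 + u))

# Correlation between crash types induced purely by the shared factor
print(np.corrcoef(angle, rear_end)[0, 1])
```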
Abstract:
Identifying crash "hotspots", "blackspots", "sites with promise", or "high-risk" locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, few studies have used controlled experiments to systematically assess them. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data where high-risk sites are not known for certain. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors are manipulated to simulate a host of 'real world' conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high-risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
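A toy version of such an experiment (not the paper's design; the safety performance function, dispersion parameter and list size below are invented) shows why Empirical Bayes tends to beat simple ranking: shrinking each site's count toward its model prediction filters out the random high counts that simple ranking mistakes for high-risk sites:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, years, k = 500, 3, 2.0   # k: assumed negative binomial dispersion

# Hypothetical safety performance function (SPF): crashes rise with traffic
aadt = rng.uniform(1_000, 30_000, n_sites)
mu = np.exp(-6.0 + 0.8 * np.log(aadt))        # SPF prediction, crashes/site/year

# Latent site effects make some sites truly worse than the SPF predicts
true_mean = mu * rng.gamma(shape=k, scale=1 / k, size=n_sites)
counts = rng.poisson(true_mean * years)

# Empirical Bayes: shrink each observed count toward its SPF prediction
w = 1.0 / (1.0 + years * mu / k)
eb = w * (years * mu) + (1 - w) * counts

top = 25
truly_bad = set(np.argsort(-(true_mean / mu))[:top])  # known a priori here
naive = set(np.argsort(-counts)[:top])                # simple ranking by count
eb_top = set(np.argsort(-(eb / (years * mu)))[:top])  # EB ranking by excess risk
print("simple-ranking hits:", len(naive & truly_bad),
      "| EB hits:", len(eb_top & truly_bad))
```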
Abstract:
National estimates of the prevalence of child abuse-related injuries are obtained from a variety of sectors, including welfare, justice, and health, resulting in inconsistent estimates across sectors. The International Classification of Diseases (ICD) is the international standard for categorising health data and aggregating data for statistical purposes, though there has been limited validation of the quality and completeness of these data or of their concordance with other sectors. This research study examined the quality of documentation and coding of child abuse recorded in hospital records in Queensland and the concordance of these data with child welfare records. A retrospective medical record review was used to examine the clinical documentation of over 1000 hospitalised injured children from 20 hospitals in Queensland. A data linkage methodology was used to link these records with records in the child welfare database. Cases were sampled from three sub-groups according to the presence of target ICD codes: definite abuse, possible abuse, and unintentional injury. Less than 2% of cases coded as unintentional were recoded after review as possible abuse, and only 5% of cases coded as possible abuse were reclassified as unintentional, though there was greater variation in the classification of cases as definite abuse compared to possible abuse. Concordance of health data with child welfare data varied across patient subgroups. This study will inform the development of strategies to improve the quality, consistency and concordance of information between health and welfare agencies to ensure adequate system responses to children at risk of abuse.
Abstract:
Most infrastructure project developments are complex in nature, particularly in the planning phase. During this stage, many vague alternatives are tabled, from the strategic to the operational level. Human judgement and decision making are characterised by biases, errors and the use of heuristics. These factors are intangible and hard to measure because they are subjective and qualitative in nature. The problem with human judgement becomes more complex when a group of people is involved: the variety of stakeholders may cause conflict due to differences in personal judgements, and the available alternatives further increase the complexity of the decision making process. It is therefore desirable to find ways of enhancing the efficiency of decision making to avoid misunderstandings and conflict within organisations. As a result, numerous attempts have been made to solve problems in this area by leveraging technologies such as decision support systems. However, most construction project management decision support systems concentrate only on model development and neglect fundamentals of computing such as requirements engineering, data communication, data management and human-centred computing. As a result, these decision support systems are complicated and less efficient in supporting the decision making of project team members. Decision support systems should be simpler, provide a better collaborative platform, allow for efficient data manipulation, and adequately reflect user needs. In this chapter, a framework for a more desirable decision support system environment is presented, and some key issues related to decision support system implementation are described.
Abstract:
Many infrastructure and utility systems in Europe and North America, such as electricity and telecommunications, used to be operated as monopolies, if not state-owned enterprises. However, they have now been disintegrated into groups of smaller companies managed by different stakeholders. Railways are no exception. Since the early 1980s, there have been reforms in the shape of restructuring of the national railways in different parts of the world. Continuous refinements are still being made to allow better utilisation of railway resources and quality of service. There has been growing interest in the industry in understanding the impacts of these reforms on operational efficiency and constraints. A number of post-evaluations have been conducted by analysing the performance of the stakeholders in terms of their profits (Crompton and Jupe 2003), quality of train service (Shaw 2001) and engineering operations (Watson 2001). Results from these studies are valuable for future improvements in the system, followed by a new cycle of post-evaluations. However, direct implementation of such changes is often costly, and the consequences take a long period of time (e.g. years) to surface. With the advance of fast computing technologies, computer simulation is a cost-effective means to evaluate a hypothetical change in a system prior to actual implementation. For example, simulation suites have been developed to study a variety of traffic control strategies according to sophisticated models of train dynamics, traction and power systems (Goodman, Siu and Ho 1998, Ho and Yeung 2001). Unfortunately, under the restructured railway environment, it is by no means easy to model the complex behaviour of the stakeholders and the interactions between them. The multi-agent system (MAS) is a recently developed modelling technique which may be useful in assisting the railway industry to conduct simulations of the restructured railway system. In MAS, a real-world entity is modelled as a software agent that is autonomous, reactive to changes, and able to initiate proactive actions and social communicative acts. MAS has been applied in the areas of supply-chain management processes (García-Flores, Wang and Goltz 2000, Jennings et al. 2000a, b) and e-commerce activities (Au, Ngai and Parameswaran 2003, Liu and You 2003), in which the objectives and behaviour of the buyers and sellers are captured by software agents. It is therefore beneficial to investigate the suitability and feasibility of applying agent modelling in railways and the extent to which it might help in developing better resource management strategies. This paper sets out to examine the benefits of using MAS to model the resource management process in railways. Section 2 first describes the business environment after the railway reforms. The problems emerging from the restructuring process are then identified in Section 3. Section 4 describes the realisation of a MAS for railway resource management under the restructured scheme and the feasibility studies expected from the model.
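A minimal sketch of the agent idea in this setting, assuming made-up stakeholder names and an invented first-come-first-served negotiation rule: each stakeholder is modelled as an autonomous object that pursues its own objective and reacts to messages from others.

```python
# Hypothetical agents: an infrastructure manager allocating track slots
# to train operators. The allocation rule is invented for illustration.

class InfrastructureManager:
    def __init__(self, capacity):
        self.capacity = capacity  # available track slots

    def handle_request(self, operator, slots):
        granted = min(slots, self.capacity)  # grant what remains
        self.capacity -= granted
        operator.receive(granted)            # social act: reply to requester

class TrainOperator:
    def __init__(self, name, demand):
        self.name, self.demand, self.granted = name, demand, 0

    def receive(self, slots):
        self.granted += slots  # react to the manager's reply

manager = InfrastructureManager(capacity=10)
operators = [TrainOperator("Op-A", 6), TrainOperator("Op-B", 8)]
for op in operators:                 # each agent proactively requests resources
    manager.handle_request(op, op.demand)
print([(op.name, op.granted) for op in operators])
```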
Abstract:
Voice recognition is one of the key enablers to reduce driver distraction as in-vehicle systems become more and more complex. With the integration of voice recognition in vehicles, safety and usability are improved because the driver's eyes and hands are not required to operate system controls. Whilst speaker-independent voice recognition is well developed, performance in high-noise environments (e.g. vehicles) is still limited. La Trobe University and Queensland University of Technology have developed a low-cost hardware-based speech enhancement system for automotive environments based on spectral subtraction and delay-sum beamforming techniques. The enhancement algorithms have been optimised using authentic Australian English collected under typical driving conditions. Performance tests conducted using speech data collected under a variety of vehicle noise conditions demonstrate a word recognition rate improvement in the order of 10% or more under the noisiest conditions. Currently developed to a proof-of-concept stage, the system has potential for even greater performance improvement.
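The spectral subtraction component can be sketched in a few lines: estimate the noise magnitude spectrum from a noise-only segment, subtract it from the noisy-speech spectrum, and resynthesise using the noisy phase. This is a minimal software illustration of the technique named above, not the optimised hardware implementation; the frame size and spectral floor are assumed values:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, noise_segment, fs=16000, floor=0.02):
    """Basic magnitude spectral subtraction.
    `noisy` is the noisy-speech signal; `noise_segment` is a
    noise-only recording used to estimate the noise spectrum."""
    f, t, S = stft(noisy, fs=fs, nperseg=512)
    _, _, N = stft(noise_segment, fs=fs, nperseg=512)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # average noise spectrum
    # Subtract the noise estimate, keeping a small spectral floor
    mag = np.maximum(np.abs(S) - noise_mag, floor * np.abs(S))
    _, enhanced = istft(mag * np.exp(1j * np.angle(S)), fs=fs, nperseg=512)
    return enhanced
```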
Abstract:
Rapidly developing information and telecommunication technologies and their platforms in the late 20th Century helped improve urban infrastructure management and influenced quality of life. Telecommunication technologies make it possible for people to deliver text, audio and video material over wired, wireless or fibre-optic networks. Technology convergence amongst these digital devices continues to create new ways in which information and telecommunication technologies are used. The 21st Century is an era of converged information, in which people are able to access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones. This chapter discusses recent developments in telecommunication networks and trends in convergence technologies, their implications for urban infrastructure planning, and their implications for the quality of life of urban residents.
Abstract:
Efficient and effective urban management systems for Ubiquitous Eco Cities require intelligent and integrated management mechanisms. This integration involves bringing together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making system and the necessary infrastructure and technologies. In Ubiquitous Eco Cities, telecommunication technologies play an important role in monitoring and managing activities via wired and wireless networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used and forms the backbone of urban management. The 21st Century is an era of converged information, in which people are able to access a variety of services, including internet and location-based services, through multi-functional devices; this convergence provides new opportunities for the management of Ubiquitous Eco Cities. This chapter discusses developments in telecommunication infrastructure and trends in convergence technologies, and their implications for the management of Ubiquitous Eco Cities.