464 results for text analytic approaches
Abstract:
Debates over the legitimacy and legality of prostitution have characterised human trafficking discourse for the last two decades. This article identifies the extent to which competing perspectives concerning the legitimacy of prostitution have influenced anti-trafficking policy in Australia and the United States, and argues that each nation-state’s approach to domestic sex work has influenced trafficking legislation. The legal status of prostitution in each country, and feminist influences on prostitution law reform, have had a significant impact on the nature of the legislation adopted.
Abstract:
Employment on the basis of merit is the foundation of Australia’s equal opportunity legislation, beginning with the Affirmative Action (Equal Opportunity for Women) Act 1986, and continuing through the Equal Opportunity for Women in the Workplace Act 1999 to the Workplace Gender Equality Act 2012, all of which require organisations with more than 100 employees to produce an organisational program promoting employment equity for women (WGEA 2014a; Strachan, Burgess & Henderson 2007). The issue of merit was seen as critically important to the objectives of the original 1986 Act, and the Affirmative Action Agency produced two monographs written by Clare Burton in 1988: Redefining Merit (Burton 1988a) and Gender Bias in Job Evaluation (Burton 1988b), both of which provided practical advice. In addition, in 1987 the Australian Government Publishing Service published Women’s Worth: Pay Equity and Job Evaluation in Australia (Burton, Hag & Thompson 1987). The equity programs set up under the 1986 legislation aimed to ‘eliminate discriminatory employment practices and to promote equal employment opportunities for women’, and this was ‘usually understood to mean that the merit principle forms the basis of appointment to positions and for promotion’ (Burton 1988a, p. 1).
Abstract:
This paper discusses three different ways of applying a single-objective binary genetic algorithm (GA) to wind farm design. The different applications are introduced by altering the binary encoding method in the GA code. The first encoding method is the traditional one, with fixed wind turbine positions. The second varies the initial positions obtained from the first method, using binary digits to represent the coordinates of a wind turbine on the X or Y axis. The third mixes the first encoding method with another that adds four more binary digits to represent one of the unavailable plots. The goal of this paper is to demonstrate how the single-objective binary algorithm can be applied and how the wind turbines are distributed, with best fitness, under various conditions. The discussion focuses mainly on the scenario of wind direction varying from 0° to 45°. Results show that choosing appropriate wind turbine positions is more significant than choosing the number of turbines, since the former has a greater influence on overall farm fitness. The farm achieves its best fitness values, farm efficiency, and total power with wind directions between 20° and 30°.
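A minimal sketch of the first encoding scheme described above, under assumed parameters: each bit of the chromosome marks whether a turbine occupies a fixed candidate plot, and a simple proximity penalty stands in for the paper's wake model. The grid size, penalty weight and GA settings here are hypothetical, not the paper's.

```python
import random

GRID = 5                      # 5 x 5 candidate plots (hypothetical)
N_BITS = GRID * GRID          # one bit per plot: 1 = turbine present

def fitness(bits):
    # Toy fitness: one unit of power per turbine, minus a crude penalty
    # for each pair of turbines closer than 2 plots (stand-in for wake loss).
    turbines = [(i // GRID, i % GRID) for i, b in enumerate(bits) if b]
    power = len(turbines)
    penalty = sum(1 for a in turbines for b in turbines
                  if a < b and abs(a[0] - b[0]) + abs(a[1] - b[1]) < 2)
    return power - 0.5 * penalty

def evolve(pop_size=40, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, N_BITS)  # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < p_mut else b for b in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The second and third encodings would change only the chromosome interpretation (bits as X/Y coordinates, or extra bits marking an unavailable plot), leaving the GA loop itself unchanged.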
Abstract:
This research used a case study approach to examine curriculum understandings and the processes of curriculum development at a Vietnamese university. The study proposes a participatory model for curriculum development contextualized for Vietnamese higher education. The study found that the curriculum is understood in diverse and sometimes conflicting ways by students, academics and administrative staff, and is developed in a hierarchical manner. Hence, the participatory model incorporates recommendations for effective practices of curriculum development at different levels within Vietnamese universities.
Abstract:
This paper proposes adding a weighted median Fisher discriminator (WMFD) projection prior to length-normalised Gaussian probabilistic linear discriminant analysis (GPLDA) modelling in order to compensate for additional session variation. In limited microphone data conditions, a linear-weighted approach is introduced to increase the influence of the microphone speech dataset. The linear-weighted WMFD-projected GPLDA system shows improvements in EER and DCF values over the pooled LDA- and WMFD-projected GPLDA systems in the interview-interview condition, as WMFD projection extracts more speaker-discriminant information from a limited number of sessions per speaker, and the linear-weighted GPLDA approach estimates reliable model parameters from limited microphone data.
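The WMFD projection and PLDA training are not reproduced here, but the sketch below illustrates, on synthetic vectors, two ingredients mentioned above: length normalisation prior to GPLDA, and a linear weighting that boosts a scarce microphone dataset when pooling scatter statistics with a larger telephone dataset. The dimensions and the weight `alpha` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
tel = rng.normal(size=(1000, 50))   # telephone i-vectors (plentiful, synthetic)
mic = rng.normal(size=(100, 50))    # microphone i-vectors (scarce, synthetic)

def length_normalise(x):
    # Project each vector onto the unit sphere, as done before Gaussian PLDA.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

tel, mic = length_normalise(tel), length_normalise(mic)

def scatter(x):
    # Covariance-style scatter matrix of one dataset.
    xc = x - x.mean(axis=0)
    return xc.T @ xc / len(x)

alpha = 0.7  # hypothetical linear weight favouring the scarce microphone data
pooled = (1 - alpha) * scatter(tel) + alpha * scatter(mic)
```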
Abstract:
Big Tobacco has been engaged in a dark, shadowy plot and conspiracy to hijack the Trans-Pacific Partnership Agreement (TPP) and undermine tobacco control measures – such as graphic health warnings and the plain packaging of tobacco products... In the context of this heavy lobbying by Big Tobacco and its proxies, this chapter provides an analysis of the debate over trade, tobacco, and the TPP. This discussion is necessarily focused on the negotiations of the free trade agreement – the shadowy conflicts before the finalisation of the text. This chapter contends that the trade negotiations threaten hard-won gains in public health – including international developments such as the WHO Framework Convention on Tobacco Control, and domestic measures, such as graphic health warnings and the plain packaging of tobacco products. It maintains that there is a need for regional trade agreements to respect the primacy of the WHO Framework Convention on Tobacco Control. There is a need both for an open and transparent process in such trade negotiations and for due and proper respect for public health in terms of substantive obligations. Part I focuses on the debate over the intellectual property chapter of the TPP, within the broader context of domestic litigation against Australia’s plain tobacco packaging regime and associated WTO disputes. Part II examines the investment chapter of the TPP, taking account of ongoing investment disputes concerning tobacco control and the declared approaches of Australia and New Zealand to investor-state dispute settlement. Part III looks at the discussion as to whether there should be specific text on tobacco control in the TPP and, if so, what its nature and content should be. This chapter concludes that the plain packaging of tobacco products – and other best practices in tobacco control – should be adopted by members of the Pacific Rim.
Abstract:
This report identifies the outcomes of a program evaluation of the five-year Workplace Health and Safety Strategy (2012-2017), specifically the engagement component within the Queensland Ambulance Service (QAS). As part of the former Department of Community Safety, the QAS's objective was to work towards harmonising occupational health and safety policies and processes to improve workplace culture. The report examines and assesses the process paths and resource inputs into the strategy, provides feedback on progress towards identified goals, and identifies opportunities for improvement and barriers to progress. Consultations were held with key stakeholders within QAS, and focus groups were facilitated with managers and health and safety representatives from each Local Area Service Network.
Abstract:
Combining datasets across independent studies can boost statistical power by increasing the number of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies, where a large number of observations is required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint analyses of the rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool SOLAR-Eclipse to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical approaches (sample-size weighted and standard-error weighted) and a mega-genetic analysis to calculate heritability estimates across populations. We performed a leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time to understand the estimate variability. Overall, meta- and mega-genetic analyses produced robust estimates of heritability.
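A hedged sketch of the two meta-analytic weighting schemes named above, applied to made-up per-cohort heritability estimates: the sample-size approach weights each cohort's h² by its N, while the standard-error approach uses inverse-variance (1/SE²) weights. All numbers are illustrative, not ENIGMA-DTI results.

```python
import numpy as np

h2 = np.array([0.52, 0.61, 0.48, 0.55, 0.58])   # per-cohort heritability (made up)
n  = np.array([310, 520, 180, 640, 598])        # per-cohort sample sizes (made up)
se = np.array([0.08, 0.05, 0.11, 0.04, 0.05])   # per-cohort standard errors (made up)

# Sample-size weighted meta-analytic estimate.
h2_sample_weighted = np.sum(n * h2) / np.sum(n)

# Standard-error (inverse-variance) weighted estimate, with pooled SE.
w = 1.0 / se**2
h2_se_weighted = np.sum(w * h2) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

print(h2_sample_weighted, h2_se_weighted, se_pooled)
```

A mega-analysis, by contrast, would fit a single variance-components model to the pooled individual-level data rather than combining per-cohort estimates.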
Abstract:
In this paper we present a robust method to detect handwritten text in unconstrained drawings on normal whiteboards. Unlike printed text in documents, free-form handwritten text has no pattern in terms of size, orientation and font, and it is often mixed with other drawings such as lines and shapes. Unlike handwriting on paper, handwriting on a normal whiteboard cannot be scanned, so detection has to be based on photos. Our work traces straight edges in photos of the whiteboard and builds a graph representation of connected components. We use geometric properties such as edge density, graph density, aspect ratio and neighborhood similarity to differentiate handwritten text from other drawings. The experimental results show that our method achieves satisfactory precision and recall. Furthermore, the method is robust and efficient enough to be deployed on a mobile device. This is an important enabler of business applications that support whiteboard-centric visual meetings in enterprise scenarios. © 2012 IEEE.
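The paper's exact features and thresholds are not given here; the sketch below, using OpenCV on a synthetic image, only illustrates the general approach of computing per-component geometric properties (bounding box, aspect ratio, edge density) from an edge map. The thresholds are hypothetical.

```python
import numpy as np
import cv2  # OpenCV

# Synthetic whiteboard photo stand-in: dark background with some "writing".
img = np.zeros((240, 320), dtype=np.uint8)
cv2.putText(img, "hello", (40, 120), cv2.FONT_HERSHEY_SIMPLEX, 2, 255, 3)

edges = cv2.Canny(img, 50, 150)                        # straight-edge map
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(edges)

for i in range(1, n_labels):                           # label 0 is background
    x, y, w, h, area = stats[i]
    aspect_ratio = w / float(h)
    edge_density = area / float(w * h)                 # edge pixels per bbox area
    # Hypothetical decision rule combining two of the geometric properties.
    is_text_like = 0.2 < aspect_ratio < 10 and edge_density > 0.05
```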
Abstract:
Assessing students’ conceptual understanding of technical content is important for instructors as well as students, both to learn content and to apply knowledge in various contexts. Concept inventories that identify possible misconceptions through validated multiple-choice questions are helpful in identifying a misconception that may exist, but they do not provide a meaningful assessment of why it exists or of the nature of the students’ understanding. We conducted a case study with undergraduate students in an electrical engineering course by testing a validated multiple-choice concept inventory that we augmented with a component for students to provide written explanations for their multiple-choice selections. Results revealed that correctly chosen multiple-choice selections did not always match correct conceptual understanding for questions testing a specific concept. Adding a text response to multiple-choice concept inventory questions provided an enhanced and meaningful assessment of students’ conceptual understanding and highlighted variables associated with current concept inventories and multiple-choice questions.
Abstract:
Designing a school library is a complex, costly and demanding process with important educational and social implications for the whole school community. Drawing upon recent research, this paper presents contrasting snapshots of two school libraries to demonstrate the impacts of greater and lesser collaboration in the designing process. After a brief literature review, the paper outlines the research design (qualitative case study, involving collection and inductive thematic analysis of interview data and student drawings). Selected findings highlight the varying experiences of each school’s teacher-librarian through the four designing phases of imagining, transitioning, experiencing and reimagining. Based on the study’s findings, the paper concludes that design outcomes are enhanced through collaboration between professional designers and key school stakeholders including teacher-librarians, teachers, principals and students. The findings and recommendations are of potential interest to teacher-librarians, school principals, education authorities, information professionals and library managers, to guide user-centred library planning and resourcing.
Abstract:
The need for better and more accurate assessments of testamentary and decision-making capacity grows as Australian society ages and the incidence of mentally disabling conditions increases. Capacity is a legal determination, but one on which medical opinion is increasingly being sought. The difficulties inherent in capacity assessments are exacerbated by the ad hoc approaches adopted by legal and medical professionals based on individual knowledge and skill, as well as by the numerous assessment paradigms that exist. This can negatively affect the quality of assessments and result in confusion as to the best way to assess capacity. This article begins by assessing the nature of capacity. The most common general assessment models used in Australia are then discussed, as are the practical challenges associated with capacity assessment. The article concludes by suggesting a way forward to satisfactorily assess legal capacity, given the significant ramifications of getting it wrong.
Abstract:
The latest generation of Deep Convolutional Neural Networks (DCNNs) has dramatically advanced challenging computer vision tasks, especially object detection and object classification, achieving state-of-the-art performance in areas including text recognition, sign recognition, face recognition and scene understanding. The depth of these supervised networks has enabled learning of deeper and more hierarchical feature representations. In parallel, unsupervised deep learning approaches such as the Convolutional Deep Belief Network (CDBN) have also achieved state-of-the-art results in many computer vision tasks. However, there is very limited research on jointly exploiting the strengths of these two approaches. In this paper, we investigate the learning capability of both methods. We compare the outputs of individual layers and show that many learnt filters and the outputs of the corresponding layers are very similar for both approaches. Stacking the DCNN on top of unsupervised layers, or replacing layers in the DCNN with the corresponding learnt layers in the CDBN, can improve recognition/classification accuracy and reduce training computational expense. We demonstrate the validity of the proposal on the ImageNet dataset.
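As an illustration of one of the two strategies above, the following hedged sketch replaces a supervised network's first convolutional layer with frozen filters assumed to come from an unsupervised model, then trains only the remaining layers. The CDBN itself is not implemented; `unsupervised_filters` is a hypothetical stand-in for what it might produce.

```python
import torch
import torch.nn as nn

# Placeholder for filters learnt by an unsupervised model (hypothetical shape).
unsupervised_filters = torch.randn(16, 3, 5, 5)

class HybridNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, padding=2)
        with torch.no_grad():
            self.conv1.weight.copy_(unsupervised_filters)
        self.conv1.weight.requires_grad = False  # freeze the unsupervised layer
        self.head = nn.Sequential(               # supervised layers stacked on top
            nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.head(self.conv1(x))

model = HybridNet()
logits = model(torch.randn(2, 3, 32, 32))         # dummy batch of 32x32 RGB images
```

Freezing the copied layer keeps its unsupervised representation intact while the supervised head is trained, which is one plausible reading of the layer-replacement idea.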
Abstract:
Recently, the debate around critical literacy has dissipated as literacy education agendas and attendant policies shift to embrace more hybrid approaches to the teaching of senior English. This paper reports on orientations towards critical literacy as expressed by four teachers of senior English who teach culturally and linguistically diverse learners. Teachers’ understandings of critical literacy are important given the emphasis on Critical and Creative Thinking as well as Literacy as General Capabilities underpinning the Australian Curriculum. Using critical discourse analysis and Janks' (2010) Synthesis Model of Critical Literacy, interview and classroom data from four teachers of English as an Additional Language or Dialect (EAL/D) learners in two high schools were analysed for the ways these teachers constructed critical literacy in their talk and practice. While all four teachers indicated significant commitment to critical literacy as an approach to English language teaching, their understandings varied. These ranged from providing access to powerful genres, to rationalist approaches to interrogating text, with less emphasis on multimodal design and drawing on learner diversity. This has significant implications for what kind of learning is being offered to EAL/D learners in the name of English teaching, for syllabus design, and for teacher professional development.
Abstract:
Modularity has been suggested to be connected to evolvability because a higher degree of independence among parts allows them to evolve as separate units. Recently, the Escoufier RV coefficient has been proposed as a measure of the degree of integration between modules in multivariate morphometric datasets. However, it has been shown, using randomly simulated datasets, that the value of the RV coefficient depends on sample size. Also, so far there is no statistical test for the difference in the RV coefficient between a priori defined groups of observations. Here, we (1) use a rarefaction analysis to show that the value of the RV coefficient depends on sample size in real geometric morphometric datasets as well; (2) propose a permutation procedure to test for the difference in the RV coefficient between a priori defined groups of observations; (3) show, through simulations, that this permutation procedure has an appropriate Type I error rate; (4) suggest that a rarefaction procedure could be used to obtain sample-size-corrected values of the RV coefficient; and (5) propose a nearest-neighbor procedure that could be used when studying the variation of modularity in geographic space. The approaches outlined here, readily extendable to non-morphometric datasets, allow study of the variation in the degree of integration between a priori defined modules. A Java application with a graphical user interface that performs the proposed test has also been developed and is available at the Morphometrics at Stony Brook web page (http://life.bio.sunysb.edu/morph/).
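A hedged sketch, on synthetic data, of the Escoufier RV coefficient between two blocks of variables and a label-permutation test for a group difference in RV, along the lines of point (2) above. The published procedure may differ in detail; all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rv_coefficient(X, Y):
    # Escoufier's RV: trace(Sxy Syx) / sqrt(trace(Sxx^2) * trace(Syy^2)),
    # computed from centred cross-product matrices (scaling cancels).
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxy = Xc.T @ Yc
    Sxx, Syy = Xc.T @ Xc, Yc.T @ Yc
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

def rv_group_difference_test(X, Y, groups, n_perm=999):
    """Permute group labels to test H0: RV is equal in the two groups."""
    g = np.asarray(groups)
    observed = abs(rv_coefficient(X[g == 0], Y[g == 0]) -
                   rv_coefficient(X[g == 1], Y[g == 1]))
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(g)                 # reshuffle group membership
        diff = abs(rv_coefficient(X[p == 0], Y[p == 0]) -
                   rv_coefficient(X[p == 1], Y[p == 1]))
        count += diff >= observed
    return observed, (count + 1) / (n_perm + 1)

X = rng.normal(size=(60, 4))                   # module 1 (e.g. one landmark block)
Y = rng.normal(size=(60, 4))                   # module 2
groups = np.repeat([0, 1], 30)                 # two a priori defined groups
print(rv_group_difference_test(X, Y, groups))
```

A rarefaction correction, as suggested in point (4), could be layered on top by subsampling each group to a common size before computing RV.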