326 results for "track survey"
at Queensland University of Technology - ePrints Archive
Abstract:
In 2010, the third biennial ADAPE Australasian benchmarking study was conducted to track educational development in Australia and New Zealand. Invitations to participate were sent to ADAPE's membership of 820; non-members were also welcome to participate. In total, 92% of the 250 survey respondents were ADAPE members. The 2010 Benchmarking Survey supports and extends the results from 2005 and 2008, and was developed taking into account participant feedback from 2008. With a view to providing the key information that participants want to know, the 2010 survey included more questions about salaries and other employment conditions; marketing and communications, especially new electronic technologies; and major gifts.
Abstract:
In 2012, the fourth biennial EducatePlus (formerly known as ADAPE Australasia) benchmarking study was conducted to track educational development in Australia and New Zealand.
Abstract:
In late 2014, the fifth biennial Educate Plus benchmarking study was conducted to track educational development in Australia and New Zealand. The 2014 survey built upon the four previous studies, which began in 2005. All participants were asked questions regarding institutional information, personal information, salary information and advancement office information. Following this, they could choose to complete one or more of the following sections according to their role(s): fundraising, marketing & communications, alumni & community relations, and admissions.
Abstract:
Research on the impact of Information Systems (IS), reported in both the academic literature and the popular press, has produced conflicting results: some studies report encouraging effects of IS, while others report nil or detrimental effects. These contradictory findings can be partially attributed to weaknesses in survey instruments. To increase the validity of conclusions drawn from IS assessment studies, survey instrument design should follow a rigorous and scientific procedure. This paper illustrates key validity and reliability issues in measuring Information Systems performance, using examples from a study designed to assess the success of Enterprise Resource Planning (ERP) systems. The article emphasizes the importance of the survey method and the theoretical considerations of item derivation, scale development and item evaluation. Examples from the ERP assessment study are provided to supplement the reader's understanding of the theoretical concepts of survey design.
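The abstract itself contains no code; as a minimal sketch of the kind of reliability check discussed under scale development and item evaluation, the snippet below computes Cronbach's alpha for a set of survey items. The function name and the sample responses are illustrative assumptions, not material from the ERP study.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    item_scores: 2-D array of shape (n_respondents, n_items),
    e.g. Likert-scale responses to the items of one construct.
    """
    x = np.asarray(item_scores, dtype=float)
    n_items = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical data: 6 respondents answering 4 items that measure one construct.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A value near or above 0.7 is commonly read as acceptable internal consistency for a newly developed scale, though the appropriate threshold depends on the research context.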