212 results for Credit generalization
Abstract:
By using the Rasch model, much detailed diagnostic information is available to developers of survey and assessment instruments and to the researchers who use them. We outline an approach to the analysis of data obtained from the administration of survey instruments that can enable researchers to recognise and diagnose difficulties with those instruments and then to suggest remedial actions that can improve the measurement properties of the scales included in questionnaires. We illustrate the approach using examples drawn from recent research and demonstrate how the approach can be used to generate figures that make the results of Rasch analyses accessible to non-specialists.
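The abstract does not reproduce the underlying model, so a brief sketch may help non-specialist readers. The snippet below simulates dichotomous responses under the Rasch model and computes rough item difficulties together with residual-based fit diagnostics; the data are synthetic, and the PROX-style estimates and outfit statistic are simplified stand-ins for what a dedicated Rasch package (as typically used in this literature) would report.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, delta):
    """Dichotomous Rasch model: P(X = 1) = exp(theta - delta) / (1 + exp(theta - delta))."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))

# Simulate 500 persons responding to 10 items (abilities and difficulties are illustrative).
theta_true = rng.normal(0.0, 1.0, size=500)
delta_true = np.linspace(-2.0, 2.0, 10)
X = (rng.random((500, 10)) < rasch_prob(theta_true, delta_true)).astype(int)

# Rough estimates from raw-score log-odds (a PROX-style approximation, not the full
# conditional/joint maximum-likelihood fit a Rasch package would perform).
p_item = X.mean(axis=0).clip(1e-6, 1 - 1e-6)
delta_hat = np.log((1 - p_item) / p_item)
delta_hat -= delta_hat.mean()                      # centre item difficulties at zero
p_person = X.mean(axis=1).clip(1e-6, 1 - 1e-6)
theta_hat = np.log(p_person / (1 - p_person))

# Standardised residuals are the raw material for the infit/outfit diagnostics
# used to flag misfitting items.
P = rasch_prob(theta_hat, delta_hat)
Z = (X - P) / np.sqrt(P * (1 - P))
outfit = (Z ** 2).mean(axis=0)                     # values near 1 indicate acceptable fit
print(np.round(delta_hat, 2))
print(np.round(outfit, 2))
```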
Abstract:
In the 21st century, our global community is changing to increasingly value creativity and innovation as driving forces in our lives. This paper will investigate how educators need to move beyond the rhetoric to effective practices for teaching and fostering creativity. First, it will describe the nature of creativity at different levels, with a focus on personal and everyday creativity. It will then provide a brief snapshot of creativity in education through the lens of new policies and initiatives in Queensland, Australia. Next it will review two significant areas related to enriching and enhancing students’ creative engagement and production: 1) influential social and environmental factors; and 2) creative self-efficacy. Finally, this paper will propose that to effectively promote student creativity in schools, we need to not only emphasise policy, but also focus on establishing a shared discourse about the nature of creativity, and researching and implementing effective practices for supporting and fostering creativity. This paper has implications for educational policy, practice and teacher training that are applicable internationally.
Abstract:
There is little evidence that workshops alone have a lasting impact on the day-to-day practice of participants. The current paper examined a strategy to increase generalization and maintenance of skills in the natural environment using pseudo-patients and immediate performance feedback to reinforce skills acquisition. A random half of pharmacies (N=30) took part in workshop training aimed at optimizing consumers' use of nonprescription analgesic products. Pharmacies in the training group also received performance feedback on their adherence to the recommended protocol. Feedback occurred immediately after a pseudo-patient visit in which confederates posed as purchasers of analgesics, and combined positive and corrective elements. Trained pharmacists were significantly more accurate at identifying people who misused the medication (P < 0.001). The trained pharmacists were also more likely than controls to use open-ended questions (P < 0.001), to assess readiness to change problematic use (P < 0.001), and to deliver a brief intervention that was tailored to the person's commitment to alter his/her usage (P < 0.001). Participants responded positively to the feedback. Results were consistent with the hypothesis that combining workshop training with on-site performance feedback enhances practitioners' adherence to protocols in the natural setting.
Abstract:
This paper explores models for enabling increased participation in experience-based learning in legal professional practice. Legal placements as part of “for-credit” units offer students the opportunity to develop their professional skills in practice, reflect on their learning and job performance, and take responsibility for their career development and planning. In short, work integrated learning (WIL) in law supports students in making the transition from university to practice. Despite its importance, WIL has traditionally taken place in practical legal training courses (after graduation) rather than during undergraduate law courses. Undergraduate WIL in Australian law schools has generally been limited to legal clinics, which require intensive academic supervision, partnerships with community legal organisations and government funding. This paper will propose two models of WIL for undergraduate law which may overcome many of the challenges to engaging in WIL in law (which are consistent with those identified generally by the WIL Report). The first is a virtual law placement in which students use technology to complete a real-world project in a virtual workplace under the guidance of a workplace supervisor. The second enables students to complete placements in private legal firms, government legal offices, or community legal centres under the supervision of a legal practitioner. The units complement each other by a) creating and enabling placement opportunities for students who may not otherwise have been able to participate in work placement by reason of family responsibilities, financial constraints, visa restrictions, distance, etc.; and b) enabling students to capitalise on existing work experience. This paper will report on the pilot offering of the units in 2008, the evaluation of the models and the changes implemented in 2009. It will conclude that this multi-pronged approach can be successful in creating opportunities for, and overcoming barriers to, participation in experiential learning in legal professional practice.
Abstract:
The selection criteria for contractor pre-qualification are characterized by the co-existence of both quantitative and qualitative data. The qualitative data is non-linear, uncertain and imprecise. An ideal decision support system for contractor pre-qualification should have the ability to handle both quantitative and qualitative data, and to map the complicated nonlinear relationships among the selection criteria, such that rational and consistent decisions can be made. In this research paper, an artificial neural network model was developed to assist public clients in identifying suitable contractors for tendering. The pre-qualification criteria (variables) were identified for the model. One hundred and twelve real pre-qualification cases were collected from civil engineering projects in Hong Kong, and eighty-eight hypothetical pre-qualification cases were also generated according to the “If-then” rules used by professionals in the pre-qualification process. Each pre-qualification case consisted of input ratings for candidate contractors’ attributes and their corresponding pre-qualification decisions. The training of the neural network model was accomplished using the developed program, in which a conjugate gradient descent algorithm was incorporated to improve the learning performance of the network. Cross-validation was applied to estimate the generalization errors based on the “re-sampling” of training pairs. The results of the analysis fully comply with the current practice of public developers in Hong Kong. The case studies show that the artificial neural network model is suitable for mapping the complicated nonlinear relationship between contractors’ attributes and their corresponding pre-qualification (disqualification) decisions. The artificial neural network model can therefore be concluded to be an ideal alternative for performing the contractor pre-qualification task.
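A minimal sketch of the kind of model the abstract describes is given below: a small feed-forward network mapping contractor attribute ratings to a pre-qualify/disqualify decision, with cross-validation used to estimate generalisation error. The synthetic data, attribute count and network size are assumptions; scikit-learn's MLPClassifier offers no conjugate-gradient solver, so the quasi-Newton "lbfgs" solver stands in for the conjugate gradient descent algorithm used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_cases, n_attributes = 200, 8                       # e.g. finance, experience, safety record, ...
X = rng.uniform(1, 5, size=(n_cases, n_attributes))  # attribute ratings on a 1-5 scale
y = (X.mean(axis=1) + rng.normal(0, 0.3, n_cases) > 3.0).astype(int)  # 1 = pre-qualify

model = MLPClassifier(hidden_layer_sizes=(6,), solver="lbfgs",
                      max_iter=2000, random_state=1)
scores = cross_val_score(model, X, y, cv=5)          # "re-sampling" estimate of generalisation error
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```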
Abstract:
A collection of 60 case studies of the use of Creative Commons licensing in different sectors, including: music, social activism, film, visual arts, collecting, government, publishing and education.
Abstract:
We propose that a general analytic framework for cultural science can be constructed as a generalization of the generic micro meso macro framework proposed by Dopfer and Potts (2008). This paper outlines this argument along with some implications for the creative industries research agenda.
Abstract:
This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the ISLM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].
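For readers unfamiliar with the framework being generalised, the bivariate Blanchard and Quah (1989) identification can be stated compactly. The notation below is illustrative rather than taken from the paper: output growth and unemployment are driven by a supply and a demand shock, and the long-run restriction zeros the (1,2) element of the cumulated moving-average matrix so that the demand shock has no permanent effect on the level of output.

```latex
\begin{align}
  \begin{pmatrix} \Delta y_t \\ u_t \end{pmatrix}
    = C(L)\,\varepsilon_t
    = \sum_{j=0}^{\infty} C_j \varepsilon_{t-j},
  \qquad
  \varepsilon_t = \begin{pmatrix} \varepsilon^{s}_t \\ \varepsilon^{d}_t \end{pmatrix},
  \quad
  \mathrm{E}\!\left[\varepsilon_t \varepsilon_t'\right] = I,
\end{align}

\begin{align}
  C(1) = \sum_{j=0}^{\infty} C_j
       = \begin{pmatrix} c_{11} & 0 \\ c_{21} & c_{22} \end{pmatrix},
\end{align}

% i.e. the demand shock has no long-run effect on the level of output; only the
% supply shock is permanent. The paper extends this logic to models containing
% more than one permanent structural shock.
```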
Abstract:
Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based, digit-dependent speaker models. The tuning of the parameters, n classifiers and m attempts/samples, is investigated and the resultant detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR = 19.6%, FAR = 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone, VoIP or internet-based applications.
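The abstract quotes only the per-classifier error rates, so a hedged sketch of how independent decisions combine may be useful. The fusion rules below (accept only if all n classifiers accept, then allow up to m attempts and accept on the first success) and the choices n = 3, m = 2 are illustrative assumptions, not the exact architecture or equations of the paper.

```python
# How error rates combine under the statistical-independence assumption.
def and_rule(far, frr, n):
    """All n independent classifiers must accept."""
    return far ** n, 1 - (1 - frr) ** n            # FAR shrinks, FRR grows

def retry_rule(far, frr, m):
    """Up to m independent attempts; accept on the first success."""
    return 1 - (1 - far) ** m, frr ** m            # FAR grows, FRR shrinks

far, frr = 0.167, 0.196                            # the weak per-digit rates quoted above
far_n, frr_n = and_rule(far, frr, n=3)             # multi-instance stage (assumed rule)
far_nm, frr_nm = retry_rule(far_n, frr_n, m=2)     # multi-sample stage (assumed rule)
print(f"after fusion: FAR={far_nm:.4f}, FRR={frr_nm:.4f}")
```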
Abstract:
Market failures involving the sale of complex merchandise, such as residential property, financial products and credit, have principally been attributed to information asymmetries. Existing legislative and regulatory responses were developed having regard to consumer protection policies based on traditional economic theories that focus on the notion of the ‘rational consumer’. Governmental responses therefore seek to impose disclosure obligations on sellers of complex goods or products to ensure that consumers have sufficient information upon which to make a decision. Emergent research, based on behavioural economics, challenges traditional ideas and instead focuses on the actual behaviour of consumers. This approach suggests that consumers as a whole do not necessarily benefit from mandatory disclosure because some, if not most, consumers do not pay attention to the disclosed information before they make a decision to purchase. The need for consumer policies to take consumer characteristics and behaviour into account is being increasingly recognised by governments, most recently in the policy framework suggested by the Australian Productivity Commission.
Abstract:
Before the Global Financial Crisis, many providers of finance had growth mandates and actively pursued development finance deals as a way of gaining higher returns on funds, with regular capital turnover and re-investment possible. This was achieved through high gearing and low presales in a strong market. As asset prices fell, loan covenants were breached and memories of the 1990s returned, banks rapidly adjusted their risk appetite via retraction of gearing and expansion of presale requirements. Early signs of loosening in bank credit policy are emerging; however, parties seeking development finance are faced with a severely reduced number of institutions from which to source funding. The few institutions that are lending are filtering out only the best credit risks by way of constrictive credit conditions, including low loan-to-value ratios, the corresponding requirement to contribute high levels of equity, a lack of support for non-prime locations and a restriction to borrowers with well-established track records. In this risk-averse and capital-constrained environment, the ability of developers to proceed with real estate developments is still being constrained by their inability to obtain project finance. This paper will examine the pre- and post-GFC development finance environment. It will identify the key lending criteria relevant to real estate development finance and will detail the related changes to credit policies over this period. The associated impact on real estate development projects will be presented, highlighting the significant constraint to supply that the inability to obtain finance poses.
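As a purely illustrative piece of arithmetic (the figures below are hypothetical and not drawn from the paper), a tighter loan-to-value ratio translates directly into the additional equity a developer must contribute:

```python
# Hypothetical project: how falling LTV ratios raise the developer's equity requirement.
end_value = 25_000_000                      # gross realisation value on completion (assumed)
total_cost = 20_000_000                     # total development cost (assumed)
for ltv in (0.80, 0.65, 0.50):              # pre-GFC versus post-GFC style gearing
    loan = end_value * ltv
    equity = max(total_cost - loan, 0)
    print(f"LTV {ltv:.0%}: loan ${loan:,.0f}, developer equity required ${equity:,.0f}")
```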
Abstract:
The resource allocation and utilization discourse is dominated by debates about rights, particularly individual property rights and ownership. This is due largely to the philosophic foundations provided by Hobbes and Locke and adopted by Bentham. In our community, though, resources come not merely with rights embedded but also with obligations. The relevant laws and equitable principles which give shape to our shared rights and obligations with respect to resources take cognizance not merely of the title to the resource (the proprietary right) but of the particular context in which the right is exercised. Moral philosophy regarding resource utilisation has from ancient times taken cognizance of obligations, but with the ascendance of modernity the agenda of moral philosophy regarding resources has been dominated, at least since John Locke, by a preoccupation with property rights; the ethical obligations associated with resource management have been largely ignored. The particular social context has also been ignored. Exploring this applied ethical terrain regarding resource utilisation, this thesis: (1) Revisits the justifications for modern property rights (and, in that, the exclusion of obligations); (2) Identifies major deficiencies in these justifications and the reasons for this; (3) Traces the concept of stewardship as understood in classical Greek writing and in the New Testament, and considers its application in the Patristic period and by Medieval and reformist writers, before turning to investigate its influence on legal and equitable concepts through to the current day; (4) Discusses the nature of the stewardship obligation, maps it and offers a schematic for applying the Stewardship Paradigm to problems arising in daily life; and (5) Discusses the way in which the Stewardship Paradigm may be applied by, and assists in resolving issues arising from within, four dominant philosophic world views: (a) Rawls' social contract theory; (b) Utilitarianism as discussed by Peter Singer; (c) Christianity, with particular focus on the theology of Douglas Hall; (d) Feminism, particularly as expressed in the ethics of care of Carol Gilligan; and offers some more general comments about stewardship in the context of an ethically plural community.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
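A hedged sketch of one step described above, modelling wavelet-subband coefficients with a generalized Gaussian distribution, is given below. The test image is synthetic noise rather than a fingerprint, PyWavelets' standard 2-D DWT stands in for the fixed wavelet-packet structure, and the shape parameter is estimated by simple moment matching rather than the least-squares procedure developed in the thesis.

```python
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

rng = np.random.default_rng(2)
image = rng.normal(size=(256, 256))                 # placeholder for a fingerprint image

# One level of a 2-D wavelet decomposition (the thesis uses a fixed wavelet-packet tree).
_, (cH, cV, cD) = pywt.dwt2(image, "bior4.4")

def ggd_shape(coeffs):
    """Estimate the GGD shape parameter beta by matching E|x| and E[x^2]."""
    c = coeffs.ravel()
    r = np.mean(np.abs(c)) ** 2 / np.mean(c ** 2)   # observed moment ratio
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(f, 0.05, 10.0)                    # solve the nonlinear ratio equation

for name, band in (("HL", cH), ("LH", cV), ("HH", cD)):
    print(name, "estimated shape parameter ~", round(ggd_shape(band), 2))
```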
Abstract:
Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person’s presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty. To address the performance degradation incurred through mismatched conditions, it was proposed to directly model this mismatch. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases. Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success. Estimating a confidence interval for the "true" verification score enabled an order of magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
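A minimal sketch of the confidence-interval idea described above follows. The per-frame log-likelihood ratios are simulated, and a normal-approximation interval on their running mean stands in for the confidence measures developed in the thesis; the trial stops as soon as the interval clears the decision threshold, so less speech is consumed when the evidence is already conclusive.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
threshold = 0.0
frames = rng.normal(loc=0.4, scale=1.5, size=5000)   # simulated per-frame LLRs (true speaker)

z = norm.ppf(0.995)                                  # half-width multiplier for a 99% interval
for n in range(50, len(frames), 50):
    mean = frames[:n].mean()
    half_width = z * frames[:n].std(ddof=1) / np.sqrt(n)
    if mean - half_width > threshold or mean + half_width < threshold:
        decision = "accept" if mean > threshold else "reject"
        print(f"confident {decision} after {n} frames "
              f"(score {mean:.2f} +/- {half_width:.2f})")
        break
else:
    print("no confident decision; fall back to the full-utterance score")
```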