947 results for Empirical Bayes Methods
Abstract:
This article examines whether investors are able to generate abnormal risk-adjusted returns in the Australian market based on media-specific firm reputational factors under market uncertainty between 2004 and 2012. The findings suggest that after controlling for crisis-centric time periods and market risk factors, contrarian trading strategies produce abnormal returns for poor corporate reputation firms but not for their good corporate reputation counterparts. Corporate reputation may be a driver of performance for poorly performing Australian firms and could be considered a stimulus for trading activity due to its explanatory capabilities.
Abstract:
The trans-activator of transcription (TAT) peptide is regarded as the “gold standard” for cell-penetrating peptides, capable of traversing a mammalian membrane passively into the cytosolic space. This characteristic has been exploited through conjugation of TAT for applications such as drug delivery. However, the process by which TAT achieves membrane penetration remains ambiguous and unresolved. Mechanistic details of TAT peptide action are revealed herein using three complementary methods: quartz crystal microbalance with dissipation (QCM-D), scanning electrochemical microscopy (SECM) and atomic force microscopy (AFM). Combined, these three scales of measurement indicate that membrane uptake of the TAT peptide occurs by trans-membrane insertion through a “worm-hole” pore that leads to ion permeability across the membrane layer. AFM data provided nanometre-scale visualisation of TAT puncturing of a mammalian-mimetic membrane bilayer. The TAT peptide does not show the same specificity towards a bacterial-mimetic membrane; QCM-D and SECM showed that the TAT peptide acts disruptively towards these membranes. This investigation supports the energy-independent uptake of the cationic TAT peptide and provides empirical data that clarify the mechanism by which the TAT peptide achieves its membrane activity. The novel use of these three biophysical techniques provides valuable insight into the mechanism of TAT peptide translocation, which is essential for improving the cellular delivery of TAT-conjugated cargoes, including therapeutic agents that must target specific intracellular locations.
Abstract:
In this study we use region-level panel data on rice production in Vietnam to investigate total factor productivity (TFP) growth in the period since reunification in 1975. Two significant reforms were introduced during this period: one in 1981 allowing farmers to keep part of their produce, and another in 1987 providing improved land tenure. We measure TFP growth using two modified forms of the standard Malmquist data envelopment analysis (DEA) method, which we have named the Three-year-window (TYW) and Full Cumulative (FC) methods, developed to deal with degrees-of-freedom limitations. Our empirical results indicate strong average TFP growth of between 3.3 and 3.5 per cent per annum, with the fastest growth observed in the period following the first reform. Our results support the assertion that incentive-related issues have played a large role in the decline and subsequent resurgence of Vietnamese agriculture.
Abstract:
This article analyses the effects of NGO microfinance programmes on household welfare in Vietnam. Data on 470 households across 25 villages were collected using a quasi-experimental survey approach to overcome any self-selection bias. The sample was designed so that member households of microfinance programmes were compared with non-member households with similar characteristics. The analysis shows no significant effects of participation in NGO microfinance on household welfare, proxied by income and consumption per adult equivalent.
Abstract:
It is commonly perceived that variables ‘measuring’ different dimensions of teaching (construed as instructional attributes) used in student evaluation of teaching (SET) questionnaires are so highly correlated that they pose a serious multicollinearity problem for quantitative analysis, including regression analysis. Using nearly 12,000 individual student responses to SET questionnaires, covering ten key dimensions of teaching and 25 courses at various undergraduate and postgraduate levels over multiple years at a large Australian university, this paper investigates whether this is indeed the case and, if so, under what circumstances. The paper tests this proposition first by examining variance inflation factors (VIFs) across courses, levels and time using individual responses, and secondly by using class averages. In the first instance, the paper finds no sustainable evidence of multicollinearity: while there were one or two isolated cases of VIFs marginally exceeding the conservative threshold of 5, in no case did the VIF for any instructional attribute come anywhere close to the high threshold value of 10. In the second instance, however, the paper finds that the attributes are highly correlated, as all the VIFs exceed 10. These findings have two implications: (a) given the ordinal nature of the data, ordered probit analysis using individual student responses can be employed to quantify the impact of instructional attributes on the TEVAL score; (b) data based on class averages cannot be used for probit analysis. An illustrative exercise using level-2 undergraduate course data suggests that higher TEVAL scores depend first and foremost on improving the explanation, presentation and organisation of lecture materials.
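As a brief illustration of the diagnostic this abstract relies on (a sketch, not the paper's analysis; the data here are synthetic), the VIF of an attribute is 1/(1 − R²) from regressing that attribute on the remaining ones, so two nearly collinear attributes produce large VIFs while an independent attribute stays near 1:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: regress each column of X on the others
    and return 1 / (1 - R^2) for each column."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.1 * rng.normal(size=200)   # nearly collinear with a
c = rng.normal(size=200)             # independent
vifs = vif(np.column_stack([a, b, c]))
print(vifs)  # first two VIFs are large, third is close to 1
```

Against the thresholds in the abstract, the first two columns would flag multicollinearity (VIF well above 10) while the third would not.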
Abstract:
Alignment-free methods, in which shared properties of sub-sequences (e.g. identity or match length) are extracted and used to compute a distance matrix, have recently been explored for phylogenetic inference. However, the scalability and robustness of these methods to key evolutionary processes remain to be investigated. Here, using simulated sequence sets of various sizes in both nucleotides and amino acids, we systematically assess the accuracy of phylogenetic inference using an alignment-free approach, based on D2 statistics, under different evolutionary scenarios. We find that compared to a multiple sequence alignment approach, D2 methods are more robust against among-site rate heterogeneity, compositional biases, genetic rearrangements and insertions/deletions, but are more sensitive to recent sequence divergence and sequence truncation. Across diverse empirical datasets, the alignment-free methods perform well for sequences sharing low divergence, at greater computation speed. Our findings provide strong evidence for the scalability and the potential use of alignment-free methods in large-scale phylogenomics.
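For readers unfamiliar with the statistic named above, the basic D2 count between two sequences is the inner product of their k-mer count vectors (a minimal sketch of the plain D2 statistic, not the authors' pipeline, which builds normalised variants into distance matrices):

```python
from collections import Counter

def d2(seq1, seq2, k=3):
    """Basic D2 statistic: sum over all k-mers w of count1(w) * count2(w)."""
    c1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
    c2 = Counter(seq2[i:i + k] for i in range(len(seq2) - k + 1))
    return sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())

score = d2("ACGTACGT", "ACGTTGCA", k=3)
print(score)  # shared 3-mers ACG and CGT each contribute 2 * 1
```

No alignment is computed at any point, which is what makes the approach robust to rearrangements and cheap on large sequence sets.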
Abstract:
Vehicle speed is an important attribute for analysing the utility of a transport mode, and the speed relationship between multiple modes of transport is of interest to traffic planners and operators. This paper quantifies the relationship between bus speed and average car speed by integrating Bluetooth data and Transit Signal Priority data from the urban network in Brisbane, Australia. The method proposed in this paper is the first of its kind to relate bus speed and average car speed by integrating multi-source traffic data in a corridor-based method. Three transferable regression models are proposed, relating not-in-service buses, in-service buses during peak periods, and in-service buses during off-peak periods to average car speed. The models are cross-validated and the interrelationships are significant.
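The kind of regression described above can be sketched in a few lines. This is an illustrative linear fit on synthetic corridor speeds, not the paper's Brisbane data or its fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical corridor observations: bus speeds (km/h) and the car speeds
# generated from them with noise, standing in for matched Bluetooth records.
bus_speed = rng.uniform(15, 45, size=100)
car_speed = 1.2 * bus_speed + 5 + rng.normal(0, 2, size=100)

# Ordinary least squares: car_speed ~ slope * bus_speed + intercept
slope, intercept = np.polyfit(bus_speed, car_speed, 1)
print(f"car_speed ~ {slope:.2f} * bus_speed + {intercept:.2f}")
```

The paper's contribution is fitting separate, transferable models of this form for the three bus operating states and cross-validating them.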
Abstract:
This thesis develops a novel approach to robot control that learns to account for a robot's dynamic complexities while executing various control tasks, drawing inspiration from biological sensorimotor control and machine learning. A robot that can learn its own control system can account for complex situations and adapt to changes in control conditions to maximise its performance and reliability in the real world. This research has developed two novel learning methods, with the aim of solving issues with learning control of non-rigid robots that incorporate additional dynamic complexities. The new learning control system was evaluated on a real three degree-of-freedom elastic-joint robot arm in a number of experiments: initially validating the learning method and testing its ability to generalise to new tasks, then evaluating the system during a learning control task requiring continuous online model adaptation.
Not just what they want, but why they want it: Traditional market research to deep customer insights
Abstract:
Purpose: This paper explores the advantages and disadvantages of both traditional market research and deep customer insight methods in order to establish how the relationship between these two domains can be optimised during firm-based innovation.
Design/methodology/approach: The paper reports on an empirical research study conducted with thirteen Australian-based firms engaged in a design-led approach to innovation. Firms were facilitated through a design-led approach in which the process of gathering deep customer insights was isolated and investigated in comparison with traditional market research methods.
Findings: Results show that deep customer insight methods provide fresh, non-obvious ways of understanding customer needs, problems and behaviours that can become the foundation of new business opportunities. The findings conclude that deep customer insight methods provide the critical layer for understanding why customers do and don’t engage with businesses; revealing why was not accessible through traditional market research methods.
Research limitations/implications: The theoretical outcome of this study is a complementary-methods matrix, providing guidance on the appropriate implementation of research methods in accordance with a project’s timeline, so as to optimise how traditional market research methods are complemented by design-led customer engagement methods.
Practical implications: Deep customer insight methods provide fresh, non-obvious ways of understanding customer needs, problems and behaviours that can become the foundation of new business opportunities. It is hoped that those in a position of data collection are encouraged to experiment with deep customer insight methods to connect with their customers on a meaningful level and translate these insights into value.
Originality/value: This paper provides original value through a new understanding of how design techniques can be applied to complement and strengthen existing market research strategies. This is crucial in an era where business competition hinges on a subtle and often intimate understanding of customer needs and behaviours.
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Therefore, error control mechanisms must be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to consider when choosing an appropriate error correction mechanism for a particular biometric-based solution.
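To make the role of error control concrete, here is a toy fuzzy-commitment sketch in the style of Juels and Wattenberg: a key is encoded with an error-correcting code (a simple repetition code here, purely for illustration; real schemes use codes such as BCH or Reed–Solomon) and XOR-masked with the enrolment biometric, so a slightly noisy re-reading still recovers the key:

```python
def encode(bits, r=3):
    # Repetition code: repeat each key bit r times.
    return [b for b in bits for _ in range(r)]

def decode(bits, r=3):
    # Majority vote within each group of r bits corrects up to (r-1)//2 flips.
    return [1 if 2 * sum(bits[i:i + r]) > r else 0
            for i in range(0, len(bits), r)]

key = [1, 0, 1, 1]
biometric_enroll = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1]

# Commitment stored on the device: codeword XOR enrolment reading.
commit = [c ^ b for c, b in zip(encode(key), biometric_enroll)]

# At verification a noisy re-reading differs in one position.
biometric_query = biometric_enroll[:]
biometric_query[2] ^= 1
recovered = decode([c ^ b for c, b in zip(commit, biometric_query)])
print(recovered == key)  # noise within the code's capacity: key recovered
```

The error-correcting code is exactly the "error control mechanism" the abstract refers to: its correction capacity must be matched to the expected fuzziness of the biometric.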
Abstract:
We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice, and for more general combinatorial decision tasks. We are not satisfied with just guaranteeing minimax regret rates, but we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of 'easy data', which may be paraphrased as "the learning problem has small variance" and "multiple decisions are useful", are synergetic. But even though there are sophisticated algorithms that exploit one of the two, no existing algorithm is able to adapt to both. In this paper we outline a new method for obtaining such adaptive algorithms, based on a potential function that aggregates a range of learning rates (which are essential tuning parameters). By choosing the right prior we construct efficient algorithms and show that they reap both benefits by proving the first bounds that are both second-order and incorporate quantiles.
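For context, the baseline these adaptive methods improve on is the standard exponential-weights (Hedge) forecaster with a single fixed learning rate; the paper's method instead aggregates over a range of learning rates, which this minimal sketch deliberately does not do:

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Exponential weights for prediction with expert advice: play the
    distribution p_i proportional to exp(-eta * cumulative loss_i) each round.
    Returns (learner's expected cumulative loss, best expert's loss)."""
    k = len(loss_rounds[0])
    cum = [0.0] * k
    learner_loss = 0.0
    for losses in loss_rounds:
        w = [math.exp(-eta * c) for c in cum]
        z = sum(w)
        p = [wi / z for wi in w]
        learner_loss += sum(pi * li for pi, li in zip(p, losses))
        cum = [c + li for c, li in zip(cum, losses)]
    return learner_loss, min(cum)

# One consistently good expert, one consistently bad one, 20 rounds.
learner, best = hedge([[0.0, 1.0]] * 20)
print(learner - best)  # regret stays bounded as weight concentrates
```

On this "easy" data the regret is a small constant; the second-order and quantile bounds discussed in the abstract formalise when and why such improvements over the minimax rate are possible.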
Abstract:
Introduction. Social media is becoming a vital source of information in disaster or emergency situations. While a growing number of studies have explored the use of social media in natural disasters by emergency staff, military personnel, medical and other professionals, very few studies have investigated the use of social media by members of the public. The purpose of this paper is to explore citizens’ information experiences in social media during times of natural disaster. Method. A qualitative research approach was applied. Data was collected via in-depth interviews. Twenty-five people who used social media during a natural disaster in Australia participated in the study. Analysis. Audio recordings of interviews and interview transcripts provided the empirical material for data analysis. Data was analysed using structural and focussed coding methods. Results. Eight key themes depicting various aspects of participants’ information experience during a natural disaster were uncovered by the study: connected; wellbeing; coping; help; brokerage; journalism; supplementary and characteristics. Conclusion. This study contributes insights into social media’s potential for developing community disaster resilience and promotes discussion about the value of civic participation in social media when such circumstances occur. These findings also contribute to our understanding of information experiences as a new informational research object.
Abstract:
Internationally there is interest in developing the research skills of pre-service teachers as a means of ongoing professional renewal, with a distinct need for systematic and longitudinal investigation of student learning. The current study takes a unique perspective by exploring the research learning journey of pre-service teachers participating in a transnational degree programme. Using a case-study design that includes both self-reported and direct measures of research knowledge, the results indicate a progression in learning, as well as evidence that this research knowledge is continued or maintained when the pre-service teachers return to their home university. The findings of this study have implications for both pre-service teacher research training and transnational programmes.