Abstract:
This study investigates the motivation of English language lecturers in a Chinese university. Recent studies have shown that low morale and job dissatisfaction are significant problems among lecturers who teach English in universities in China. Given the importance of teaching English as a second language in China, this problem has potentially significant ramifications for the nation’s future. Low staff morale is likely to be associated with less effective teaching and poor student learning outcomes. Although the problem is acknowledged, there has been limited research into the underlying contributing factors. To address this, a sequential explanatory mixed methods approach was adopted and implemented in two phases at a large regional university in Northern China. The participants in the main study were 100 lecturers from two colleges at this university. All of the lecturers were responsible for teaching English as a foreign language (TEFL); 50 were teaching English majors and 50 were teaching students whose majors were not English. The research was informed by a synthesis of self-determination theory and theories of organisational culture. The study found that: 1) in contrast to previously reported studies, lecturers in this institution were generally autonomously motivated in their teaching; 2) their level of motivation was nevertheless influenced by their personal experiences and their varying sense of competence, relatedness and autonomy; and 3) personal experiences and contextual factors, such as the influence of Chinese culture, the societal context and the organisational climate, were significant in regulating lecturers’ motivation to teach. The findings are significant for leaders in higher education who need to implement policies that foster effective work environments. The study also provides insights into the capacity of self-determination theory to explain motivation in a Chinese cultural context.
Abstract:
Background: Injury is a leading cause of adolescent death. Risk-taking behaviours, including unsafe road behaviours, violence and alcohol use, are primary contributors. Recent research suggests adolescents look out for their friends and engage in protective behaviour to reduce others’ involvement in risk-taking. A positive school environment, and particularly students’ school connectedness, is also associated with reduced injury risk. Aim: This study aimed to understand the role of school connectedness in adolescents’ intentions to protect their friends from involvement in alcohol use, fights, drink driving and unlicensed driving. Method: Surveys were completed by 540 students aged 13-14 years (49% male). Four sequential logistic regression analyses were conducted to determine whether school connectedness statistically predicted intentions to protect friends from injury-risk behaviours. In all analyses, gender and ethnicity were entered at step 1, students’ own risk behaviour at step 2, and school connectedness scores at step 3. Results: School connectedness significantly predicted intentions to protect friends from all four injury-risk behaviours, after accounting for the variance attributable to gender, ethnicity and adolescents’ own involvement in injury risks. Significance: School connectedness is negatively associated with adolescents’ own injury-risk behaviours. This research extends our knowledge of this critical protective factor, as it shows that students who are connected to school are also more likely to protect their friends from alcohol use, violence and unsafe road behaviours. School connectedness may therefore be an important factor to target in school-based prevention programs, both to reduce adolescents’ own injury-risk behaviour and to increase injury prevention among friends.
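For readers unfamiliar with the stepwise design, the sketch below illustrates the three-step sequential logistic regression described above, in which blocks of predictors are entered in order and each step's incremental contribution is assessed. It is a minimal illustration only: the column names (gender, ethnicity, own_risk, connectedness, protect_intent) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch of a three-step sequential (hierarchical) logistic regression.
# All column names are hypothetical; protect_intent is assumed binary (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data file

steps = [
    "protect_intent ~ gender + ethnicity",                             # step 1
    "protect_intent ~ gender + ethnicity + own_risk",                  # step 2
    "protect_intent ~ gender + ethnicity + own_risk + connectedness",  # step 3
]

prev_llf = None
for formula in steps:
    fit = smf.logit(formula, data=df).fit(disp=False)
    # The gain in log-likelihood over the previous step reflects the
    # incremental contribution of the newly entered block of predictors.
    if prev_llf is not None:
        print(formula, "| delta log-likelihood:", fit.llf - prev_llf)
    prev_llf = fit.llf
```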
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics for ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially: in a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique that developed a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We also make other subtle changes to the model and algorithm to improve the robustness of the approach.
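As a point of reference for the likelihood-free approach developed in Part I, the sketch below shows the basic ABC rejection sampler that SMC-ABC algorithms refine: parameters are drawn from the prior, data are simulated from the model, and draws are kept when simulated and observed summary statistics are close. The toy model and summaries here are placeholders, not those used in the thesis.

```python
# Minimal ABC rejection sampler illustrating the likelihood-free idea.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy model standing in for an intractable-likelihood simulator.
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    # Low-dimensional summary statistics of a dataset.
    return np.array([x.mean(), x.std()])

y_obs = simulate(2.0)                        # pretend these are the observed data
s_obs = summary(y_obs)

eps = 0.3                                    # tolerance on summary discrepancy
accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-10, 10)             # draw a candidate from the prior
    s_sim = summary(simulate(theta))
    if np.linalg.norm(s_sim - s_obs) < eps:  # accept when simulated summaries
        accepted.append(theta)               # are close to the observed ones

print("approximate posterior mean:", np.mean(accepted))
```

The accepted draws form an approximate posterior sample; SMC-ABC improves on this by moving a population of particles through a sequence of decreasing tolerances, greatly reducing the number of wasted model simulations.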
Abstract:
This paper investigates the critical role of knowledge sharing (KS) in leveraging manufacturing activities, namely integrated supplier management (ISM) and new product development (NPD), to improve business performance (BP) within the context of Taiwanese electronic manufacturing companies. The research adopted a sequential mixed method design, which provided both quantitative empirical evidence and qualitative insights into the moderating effect of KS on the relationships between these two core manufacturing activities and BP. First, a questionnaire survey was administered, resulting in a sample of 170 managerial and technical professionals providing their opinions on KS, NPD and ISM activities and the BP level within their respective companies. On the basis of the collected data, factor analysis was used to verify the measurement model, followed by correlation analysis to explore factor interrelationships, and finally moderated regression analyses to extract the moderating effects of KS on the relationships of NPD and ISM with BP. Following the quantitative study, six semi-structured interviews were conducted to provide in-depth qualitative insights into the value added by KS practices to the targeted manufacturing activities and the extent of their leveraging power. Results from the quantitative statistical analysis indicated that KS, NPD and ISM all have a significant positive impact on BP. Specifically, IT infrastructure and open communication were identified as the two types of KS practices that could facilitate enriched supplier evaluation and selection, empower active employee involvement in the design process, and provide support for product simplification and the modular design process, thereby improving manufacturing performance and strengthening company competitiveness. The interviews corroborated many of the empirical findings, suggesting that in the contemporary manufacturing context KS has become an integral part of many ISM and NPD activities and, when embedded properly, can lead to an improvement in BP. The paper also highlights a number of useful implications for manufacturing companies seeking to leverage their BP through innovative and sustained KS practices.
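The moderating effect referred to above is conventionally tested with an interaction term in a regression model. A minimal sketch of that step is given below, assuming hypothetical column names (BP, NPD, KS) for the factor scores; it illustrates the generic technique, not the study's exact model.

```python
# Sketch of a moderated regression: does KS moderate the NPD -> BP link?
# Column names are hypothetical placeholders for the study's factor scores.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("factor_scores.csv")  # hypothetical data file

# Mean-centre the predictors so the interaction coefficient is interpretable.
for col in ["NPD", "KS"]:
    df[col] = df[col] - df[col].mean()

# A significant NPD:KS coefficient indicates a moderating effect of KS.
model = smf.ols("BP ~ NPD + KS + NPD:KS", data=df).fit()
print(model.summary())
```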
Abstract:
This research examines the entrepreneurship phenomenon and the question: why are some venture attempts more successful than others? This question is not a new one. Prior research has answered it by describing those who engage in nascent entrepreneurship. Yet this approach has yielded little consensus and offers little comfort for those newly considering venture creation (Gartner, 1988). Rather, this research considers the process of venture creation, by focusing on the actions of nascent entrepreneurs. However, the venture creation process is complex (Liao, Welsch, & Tan, 2005) and multi-dimensional (Davidsson, 2004). The process can vary in the amount of action engaged in by the entrepreneur; the temporal dynamics of how action is enacted (Lichtenstein, Carter, Dooley, & Gartner, 2007); and the sequence in which actions are undertaken. Little is known about whether any, or all three, of these dimensions matter. Further, there exists scant general knowledge about how the venture creation process influences venture creation outcomes (Gartner & Shaver, 2011). Therefore, this research conducts a systematic study of what entrepreneurs do as they create a new venture. The primary goal is to develop general principles so that advice may be offered on how to ‘proceed’, rather than how to ‘be’. Three integrated empirical studies were conducted that separately focus on each of the interrelated dimensions. The basis for this was a randomly sampled, longitudinal panel of nascent ventures. Upon recruitment these ventures were in the process of being created, but yet to be established as new businesses. The ventures were tracked one year later to follow up on outcomes. Accordingly, this research makes the following original contributions to knowledge. First, the findings suggest that all three of the dimensions play an important role: action, dynamics, and sequence. This implies that future research should take a multi-dimensional view of the venture creation process; failing to do so can only result in a limited understanding of a complex phenomenon. Second, action is the fundamental means through which venture creation is achieved. Simply put, more active venture creation efforts are more likely to be successful. Further, action is the medium through which resource endowments exert their effect upon venture outcomes. Third, the dynamics of how venture creation plays out over time are also influential. Here, a process with a high rate of action which increases in intensity is more likely to achieve positive outcomes. Fourth, sequence analysis suggests that the order in which actions are taken also drives outcomes. Although venture creation generally flows in sequence from discovery toward exploitation (Shane & Venkataraman, 2000; Eckhardt & Shane, 2003; Shane, 2003), processes that actually proceed in this way are less likely to be realized. Instead, processes which intertwine discovery and exploitation actions in symbiosis are more likely to achieve better outcomes (Sarasvathy, 2001; Baker, Miner, & Eesley, 2003). Further, an optimal venture creation order exists somewhere between these sequential and symbiotic process archetypes: a process which starts out as symbiotic discovery and exploitation, but switches to focus exclusively on exploitation later on, is most likely to achieve venture creation. These sequence findings are unique and suggest future integration between opposing theories of order in venture creation.
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of these WSNs to demanding SHM applications such as modal analysis and damage identification. This paper first presents a brief review of the most significant uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and the preferred combination of OMA techniques to cope with SHM-WSN uncertainties are recommended.
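For context, the core of the FDD family mentioned above is a singular value decomposition of the output cross-power spectral density matrix at each frequency line: peaks in the first singular value spectrum indicate natural frequencies, and the corresponding singular vectors approximate mode shapes. The sketch below is a minimal illustration of that step, assuming acceleration data in a channels-by-samples array; it is not the paper's implementation.

```python
# Minimal sketch of the core FDD step (peak-picking is left to the user).
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """acc: (n_channels, n_samples) acceleration records at sampling rate fs."""
    n_ch = acc.shape[0]
    # Frequency grid from one auto-spectrum, then the full CSD matrix G(f).
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    # SVD at each frequency line; peaks of s1 mark candidate modes.
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
                   for k in range(len(f))])
    return f, s1
```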
Abstract:
A major challenge in text classification is the noisiness of text data, which lowers classification quality. Many classification processes can be divided into two sequential steps: scoring and threshold setting (thresholding). To deal with noisy data, it is therefore important both to describe positive features effectively in scoring and to set a suitable threshold. Most existing text classifiers do not concentrate on these two tasks. In this paper, we propose a novel text classifier that uses pattern-based scoring to describe positive features effectively, followed by threshold setting. The threshold is derived from the scores of the training set, making the approach simple to implement with other scoring methods. Experiments show that our pattern-based classifier is promising.
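As a rough illustration of the two-step scheme (scoring followed by thresholding), the sketch below scores a document by summing the weights of matched term patterns and sets the decision threshold from training-set scores. The scoring function is a hypothetical stand-in; the paper's pattern-based scoring is more sophisticated.

```python
# Sketch of score-then-threshold classification with pattern-based scoring.
# Patterns are term tuples with weights; all names here are illustrative.

def score(doc_terms, pattern_weights):
    # Sum the weights of patterns (term sets) fully contained in the document.
    return sum(w for pattern, w in pattern_weights.items()
               if set(pattern) <= doc_terms)

def fit_threshold(pos_scores, neg_scores):
    # Place the threshold between the classes on the training scores,
    # e.g. midway between the lowest positive and highest negative score.
    return 0.5 * (min(pos_scores) + max(neg_scores))

def classify(doc_terms, pattern_weights, threshold):
    return score(doc_terms, pattern_weights) >= threshold

# Usage with toy data:
weights = {("global", "warming"): 2.0, ("carbon",): 1.0}
thr = fit_threshold(pos_scores=[2.5, 3.0], neg_scores=[0.0, 1.0])
print(classify({"carbon", "global", "warming"}, weights, thr))  # True
```

Because the threshold is fitted purely from training-set scores, the same procedure plugs into any scoring method, which is the portability the abstract claims.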
Abstract:
Damage assessment (damage detection, localization and quantification) in structures, together with appropriate retrofitting, will enable structures to function safely and efficiently. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they do not rely on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out to apply DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in the use of DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams distribute non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, due to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in the modulus or section depth, or the use of rotational spring elements) that have been used extensively in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures. This research addresses that gap by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, this research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method which can correctly simulate the post-cracking behaviour of RC structures and was used in the remainder of this study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms have the ability to identify common issues associated with vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential curve fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research make a significant contribution to the technical knowledge of VBDIT and enhance the accuracy of damage assessment in RC structures.
The application of the research findings to RC flexural members will enable their safe and efficient performance.
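For context, a widely used modal strain energy damage index for a beam element, the Stubbs index, takes the form below, where \(\phi_i''\) is the curvature of the i-th mode shape, an asterisk marks the damaged state, \([a_j, b_j]\) is element j, and \(L\) is the beam length. This is a common form from the literature, shown for orientation only; the four MSEDIs proposed in the research are refinements of this idea rather than this exact expression.

```latex
\beta_j=\frac{\sum_{i=1}^{m} f_{ij}^{*}}{\sum_{i=1}^{m} f_{ij}},
\qquad
f_{ij}=\frac{\int_{a_j}^{b_j}\bigl[\phi_i''(x)\bigr]^{2}\,dx
            +\int_{0}^{L}\bigl[\phi_i''(x)\bigr]^{2}\,dx}
            {\int_{0}^{L}\bigl[\phi_i''(x)\bigr]^{2}\,dx}
```

Elements whose index rises significantly above 1, typically judged via the normalised score \(Z_j=(\beta_j-\bar{\beta})/\sigma_\beta\), are flagged as damaged.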
Abstract:
Background: Xanthine oxidase (XO) is a complex molybdeno-flavoprotein occurring with high activity in the milk fat globule membrane (MFGM) of all mammalian milk and is involved in the final stage of degradation of purine nucleotides. It catalyzes the sequential oxidation of hypoxanthine to xanthine and uric acid, accompanied by the production of hydrogen peroxide and superoxide anion. Human saliva has been extensively characterised in terms of its proteins, electrolytes, cortisol, melatonin and some metabolites such as amino acids, but little is known about its nucleotide metabolites. Method: Saliva was collected with swabs from babies at full term (1-4 days), 6 weeks, 6 months and 12 months. Unstimulated fasting (morning) saliva samples were collected directly from 77 adults. Breast milk was collected from 24 new mothers. Saliva was extracted from swabs and ultra-filtered. Nucleotide metabolites were analyzed by RP-HPLC with UV-photodiode array detection and ESI-MS/MS. XO activity was measured as peroxide production from hypoxanthine. Bacterial inhibition over time was assessed using CFU/mL or OD. Results: Median concentrations (μmol/L) of salivary nucleobases and nucleosides for neonates/6 weeks/6 months/12 months/adults respectively were: uracil 5.3/0.8/1.4/0.7/0.8, hypoxanthine 27/7.0/1.1/0.8/2.0, xanthine 19/7.0/2.0/2.0/2.0, adenosine 12/7.0/0.9/0.8/0.1, inosine 11/5.0/0.3/0.4/0.2, guanosine 7.0/6.0/0.5/0.4/0.1, uridine 12/0.8/0.3/0.9/0.4. Concentrations of deoxynucleosides and dihydropyrimidines were essentially negligible. XO activity (Vmax: mean ± SD) in breast milk was 8.9 ± 6.2 μmol/min/L and endogenous peroxide was 27 ± 12 μmol/L; mixing breast milk with neonate saliva generated ~40 μmol/L peroxide, which inhibited Staphylococcus aureus. Conclusions: Salivary metabolites, particularly xanthine and hypoxanthine, are high in neonates, transitioning to low adult levels between 6 weeks and 6 months (p < 0.001). Peroxide occurs in breast milk and is boosted during suckling as an antibacterial system.
Abstract:
A theoretical framework for a construction management decision evaluation system for project selection is developed by means of a literature review. The theory is developed through examination of the major factors concerning the project selection decision from a deterministic viewpoint, where the decision-maker is assumed to possess 'perfect knowledge' of all the aspects involved. Four fundamental project characteristics are identified, together with three meaningful outcome variables. The relationships within and between these variables are considered, together with some possible solution techniques. The theory is then extended to the time-related, dynamic aspects of the problem, leading to the implications of imperfect knowledge and a non-deterministic model. A solution technique is proposed in which Gottinger's sequential machines are utilised to model the decision process.
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving the ST-FBTE in three dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider the ST-FBTE on a finite domain, where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for the ST-FBTE are given to confirm our theoretical findings.
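For orientation, a representative scalar form of an ST-FBTE with these operators is sketched below, where \(0<\alpha\le 1\) is the Caputo–Djrbashian time order, \(\tfrac{1}{2}<\beta\le 1\) is the Riesz space order, and \(K_\beta\) is a generalised diffusion coefficient. This is a simplified sketch for illustration, omitting the relaxation and precession terms that appear in fuller forms of the magnetisation equations.

```latex
{}^{C}_{0}D_{t}^{\alpha}\,M(\mathbf{r},t)
= K_{\beta}\left(
\frac{\partial^{2\beta}}{\partial |x|^{2\beta}}
+\frac{\partial^{2\beta}}{\partial |y|^{2\beta}}
+\frac{\partial^{2\beta}}{\partial |z|^{2\beta}}
\right) M(\mathbf{r},t)
```

The alternating direction idea is to split each time step into sub-steps that each treat one spatial fractional operator implicitly, which is what makes the three-dimensional problem tractable.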
Abstract:
In the decision-making of multi-area ATC (Available Transfer Capability) in an electricity market environment, the existing transmission network resources should be optimally dispatched and employed in a coordinated manner, on the premise that secure system operation is maintained and the associated risk is controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of specified areas under the influence of several uncertainty factors; based on this, a coordinated probabilistic optimal decision-making model with maximal risk benefit as its objective is developed for multi-area ATC. The NSGA-II is applied to calculate the ATC of each area, taking into account the risk cost caused by the relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated using the IEEE 118-bus test system. Simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and by the relative importance of the different areas.
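The non-sequential sampling step referred to above can be sketched as follows: component states are drawn independently from their availabilities, with no chronological ordering of events, and each sampled system state is evaluated to build an empirical ATC distribution. The evaluation function below is a hypothetical placeholder for the actual network computation (e.g. an optimal power flow giving area-to-area ATC).

```python
# Sketch of non-sequential Monte Carlo sampling of an ATC distribution.
import numpy as np

rng = np.random.default_rng(1)

def sample_atc_distribution(availabilities, evaluate_atc, n_states=10000):
    """availabilities: per-component in-service probabilities.
    evaluate_atc: hypothetical placeholder mapping a system state to ATC."""
    atc_samples = []
    for _ in range(n_states):
        # True = component in service, False = outage. States are drawn
        # independently with no temporal ordering, which is what makes the
        # simulation "non-sequential".
        state = rng.random(len(availabilities)) < availabilities
        atc_samples.append(evaluate_atc(state))
    return np.asarray(atc_samples)  # empirical ATC probability distribution
```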
Abstract:
Background: In response to health workforce shortages, policymakers have considered expanding the roles that a health professional may perform. A more traditional combination of health professional roles is that of the dispensing doctor (DD), who routinely prescribes and dispenses pharmaceuticals. A systematic review of mainly overseas DDs’ practices found that DDs tended to prescribe more items per patient, prescribe generically less often, and show poorer adherence to best practice. Convenience for patients was cited by both patients and DDs as the main reason for dispensing. In Australia, rural doctors are allowed to dispense Pharmaceutical Benefits Scheme (PBS) subsidised pharmaceutical benefits if there is no reasonable pharmacy coverage. Little was known about the practices of these Australian DDs. Objectives: To compare the PBS prescribing patterns of dispensing doctors with those of matched non-dispensing doctors and to identify factors that influence prescribing behaviour. Method: A sequential explanatory (QUAN-->qual) mixed methodology was utilised. Firstly, rurality-matched DDs’ and non-DDs’ PBS data for fiscal years 2005-7 were analysed against criteria distilled from a systematic review and stakeholder consultations. Secondly, structured interviews were conducted with a purposive sample of DDs to examine the quantitative findings. Key findings: DDs prescribed significantly fewer PBS prescriptions per patient but used Regulation 24 significantly more than non-DDs; Regulation 24 use biased the prescribing data. DDs prescribed proportionally more penicillin-type antibiotics, adrenergic inhalants and non-steroidal anti-inflammatories than non-DDs. Reasons offered by DD respondents highlighted that prescribing was influenced by an awareness of cost to patients, peer pressure, and confidential prescriber feedback provided on a regular basis. Implications: This census study does not support international data suggesting that DDs are less judicious in their prescribing. There is some evidence that DDs might reduce health inequity between rural and urban Australians, and that the DD health model is valuable to patients in isolated communities.
Abstract:
Objective: To evaluate the prescribing practices of Australian dispensing doctors (DDs) and to explore their interpretations of the findings. Design, participants and setting: Sequential explanatory mixed methods. The quantitative phase comprised analysis of Pharmaceutical Benefits Scheme (PBS) claims data of DDs and non-DDs, 1 July 2005 to 30 June 2007. The qualitative phase involved semi-structured interviews with DDs in rural and remote general practice across Australian states, August 2009 to February 2010. Main outcome measures: The number of PBS prescriptions per 1000 patients and use of Regulation 24 of the National Health (Pharmaceutical Benefits) Regulations 1960 (r. 24); DDs' interpretations of the findings. Results: 72 DDs' and 1080 non-DDs' PBS claims data were analysed quantitatively. DDs issued fewer prescriptions per 1000 patients (9452 v 15057; P = 0.003), even with a similar proportion of concessional patients and patients aged >65 years in their populations. DDs issued significantly more r. 24 prescriptions per 1000 prescriptions than non-DDs (314 v 67; P = 0.008). Interviews with 22 DDs indicated that the lower prescription numbers were due to perceived expectations from their peers regarding prescribing norms and the need to generate less administrative paperwork in small practices. Conclusions: Contrary to overseas findings, we found no evidence that Australian DDs overprescribed because of their additional dispensing role. MJA 2011; 195: 172-175.
Abstract:
Iris-based identity verification is highly reliable, but it can also be subject to attacks. Pupil dilation or constriction stimulated by the application of drugs is an example of a sample presentation security attack which can lead to higher false rejection rates. Suspects on a watch list can potentially circumvent an iris-based system using such methods. This paper investigates a new approach using multiple parts of the iris (instances) and multiple iris samples in a sequential decision fusion framework that can yield robust performance. Results are presented and compared with the standard full-iris approach for a number of iris degradations. An advantage of the proposed fusion scheme is that the trade-off between detection errors can be controlled by setting parameters such as the number of instances and the number of samples used in the system. The system can then be operated to match security threat levels. It is shown that for optimal values of these parameters, the fused system also has a lower total error rate.
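As an illustration of the sequential decision fusion idea, the sketch below accumulates per-instance, per-sample match scores until an accept or reject threshold is crossed, in the spirit of a sequential probability ratio test. The score representation and thresholds are assumptions for illustration, not the paper's exact scheme; in the paper, the number of instances and samples are the tuning parameters matched to the threat level.

```python
# Sketch of sequential decision fusion over iris instances and samples.
# Scores are assumed to be log-likelihood ratios (genuine vs impostor);
# thresholds are illustrative and would be tuned to the security level.
def sequential_fusion(scores, accept_thr=4.0, reject_thr=-4.0):
    """scores: iterable of match scores, one per iris instance/sample."""
    total = 0.0
    for s in scores:
        total += s
        if total >= accept_thr:   # enough evidence for the genuine claim
            return "accept"
        if total <= reject_thr:   # enough evidence against it
            return "reject"
    # No early decision after all instances/samples: fall back to the sign.
    return "accept" if total >= 0 else "reject"

# Usage with toy scores from, e.g., four iris parts across two samples:
print(sequential_fusion([1.2, 0.8, 1.5, 1.0]))  # accept
```

Raising the accept threshold or using more instances/samples trades false accepts against false rejects, which is the controllable trade-off the abstract describes.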