Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
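The cooperative coevolutionary scheme described above can be illustrated with a toy sketch: two subpopulations, one evolving resource allocations and one evolving task priorities, are each evaluated by pairing individuals with the best representative of the other subpopulation. Everything below (task durations, costs, precedence chains, deadline, and the mutation-only operators) is invented for illustration and is not the paper's actual encoding:

```python
import random

random.seed(0)

# Hypothetical toy instance: 6 tasks in two sequential service chains,
# each task assigned to one of 2 resources; minimise cost subject to a deadline.
T = 6
DURATION = [3, 2, 4, 1, 5, 2]
COST = [[1, 4], [2, 3], [5, 1], [2, 2], [3, 6], [1, 2]]  # cost per resource
PRED = {1: 0, 2: 1, 4: 3, 5: 4}   # within-service precedence chains
DEADLINE = 12

def fitness(alloc, priority):
    """Decode an (allocation, priority list) pair via list scheduling;
    a large penalty is added if the makespan misses the deadline."""
    free = [0, 0]                  # next free time of each resource
    done = {}                      # finish time of each scheduled task
    pending = list(priority)
    while pending:
        # highest-priority task whose predecessor (if any) is finished
        t = next(x for x in pending
                 if PRED.get(x) is None or PRED.get(x) in done)
        pending.remove(t)
        r = alloc[t]
        start = max(free[r], done.get(PRED.get(t), 0))
        done[t] = start + DURATION[t]
        free[r] = done[t]
    makespan = max(done.values())
    cost = sum(COST[t][alloc[t]] for t in range(T))
    return cost + (1000 if makespan > DEADLINE else 0)

def ccga(generations=60, pop=20):
    # Two cooperating subpopulations: allocations and task priorities.
    allocs = [[random.randint(0, 1) for _ in range(T)] for _ in range(pop)]
    orders = [random.sample(range(T), T) for _ in range(pop)]
    best = (allocs[0], orders[0])
    for _ in range(generations):
        # Evaluate each individual with the best collaborator from the
        # other subpopulation (the standard CCGA collaboration scheme).
        allocs.sort(key=lambda a: fitness(a, best[1]))
        orders.sort(key=lambda o: fitness(best[0], o))
        best = (allocs[0], orders[0])
        # Mutation-based offspring replace the worst half of each population.
        for i in range(pop // 2, pop):
            a = allocs[i - pop // 2][:]
            a[random.randrange(T)] ^= 1          # flip one resource choice
            allocs[i] = a
            o = orders[i - pop // 2][:]
            x, y = random.sample(range(T), 2)
            o[x], o[y] = o[y], o[x]              # swap two priorities
            orders[i] = o
    return best, fitness(*best)

(best_alloc, best_order), best_cost = ccga()
```

A fitness below the penalty threshold (1000 in this sketch) indicates a deadline-feasible assignment; the deadline constraint is handled by penalisation here purely for brevity.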
Abstract:
While a number of factors have been highlighted in the innovation adoption literature, little is known about whether different factors are related to innovation adoption in differently sized firms. We used preliminary case studies of small, medium and large firms to ground our hypotheses, which were then tested using a survey of 94 firms. We found that external stakeholder pressure and non-financial readiness were related to innovation adoption in SMEs, but that for large firms, adoption was related to the opportunity to innovate. It may be that the difficulties of adopting innovations, including both the financial cost and the effort involved, are too great for SMEs to overcome unless there is either a compelling need (external pressure) or enough in-house capability (non-financial readiness). This suggests that SMEs are more likely to have innovation "pushed" onto them, while large firms are more likely to "pull" innovations when they have the opportunity.
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions used to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that all of them can identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies.
That is, QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the choice of volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
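The statistical loss functions named above have standard multivariate forms. A minimal sketch, using the squared-Frobenius-norm MSE and the Stein-type QLIKE loss with invented covariance matrices (the thesis's exact definitions may differ), shows that both losses are minimised at zero by a perfect forecast, while a misspecified forecast incurs a positive loss:

```python
import numpy as np

# Hypothetical 2-asset example: sigma plays the role of the (proxy for the)
# true conditional covariance matrix, h a model's forecast of it.

def mse_loss(sigma, h):
    """Squared Frobenius distance between proxy and forecast."""
    return np.sum((sigma - h) ** 2)

def qlike_loss(sigma, h):
    """Stein-type QLIKE: tr(H^{-1} Sigma) - log|H^{-1} Sigma| - N.
    Equals zero if and only if H == Sigma."""
    n = sigma.shape[0]
    a = np.linalg.solve(h, sigma)          # H^{-1} Sigma, no explicit inverse
    sign, logdet = np.linalg.slogdet(a)
    return np.trace(a) - logdet - n

sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
good = sigma.copy()                        # perfect forecast
bad = np.array([[1.5, 0.0], [0.0, 1.5]])  # misspecified forecast
```

Comparing `qlike_loss(sigma, good)` with `qlike_loss(sigma, bad)` (and likewise for MSE) illustrates the discrimination property the thesis studies; the two losses can nonetheless rank imperfect forecasts differently, which is why the choice of loss function matters.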
Abstract:
The CDKN2A gene encodes p16(CDKN2A), a cell-cycle inhibitor protein which prevents inappropriate cell cycling and, hence, proliferation. Germ-line mutations in CDKN2A predispose to the familial atypical multiple-mole melanoma (FAMMM) syndrome but have also been seen in rare families in which only one or two individuals are affected by cutaneous malignant melanoma (CMM). We therefore sequenced exons 1α and 2 of CDKN2A using lymphocyte DNA isolated from index cases from 67 families with cancers at multiple sites, where the patterns of cancer did not resemble those attributable to known genes such as hMLH1, hMSH2, BRCA1, BRCA2, TP53 or other cancer susceptibility genes. We found one mutation, a missense mutation resulting in a methionine-to-isoleucine change at codon 53 (M53I) of exon 2. The individual tested had developed two CMMs but had no dysplastic nevi and lacked a family history of dysplastic nevi or CMM. Other family members had been diagnosed with oral cancer (two persons), bladder cancer (one person) and possibly gall-bladder cancer. While this mutation has been reported in Australian and North American melanoma kindreds, we did not observe it in 618 chromosomes from Scottish and Canadian controls. Functional studies revealed that the CDKN2A variant carrying the M53I change was unable to bind effectively to CDK4, showing that this mutation is of pathological significance. Our results confirm that CDKN2A mutations are not limited to FAMMM kindreds but also demonstrate that multi-site cancer families without melanoma are very unlikely to carry CDKN2A mutations.
Abstract:
This thesis examines the ways in which citizens find out about socio-political issues. The project set out to discover how audience characteristics such as scepticism towards the media, gratifications sought, need for cognition and political interest influence information selection. While most previous information choice studies have focused on how individuals select from a narrow range of media types, this thesis considered a much wider sweep of the information landscape. This approach was taken to obtain an understanding of information choices in a more authentic context: in everyday life, people are not simply restricted to one or two news sources. Rather, they may obtain political information from a vast range of information sources, including media sources (e.g. radio, television, newspapers) and sources from beyond the media (e.g. interpersonal sources, public speaking events, social networking websites). Thus, the study included both news media and non-news-media information sources. Data collection for the project consisted of a written, postal survey. The survey was administered to a probability sample in the greater Brisbane region, the third largest city in Australia. Data were collected during March and April 2008, approximately four months after the 2007 Australian Federal Election; hence, the study was conducted in a non-election context. A total of 585 usable surveys were obtained. In addition to measuring the attitudinal characteristics listed above, respondents were surveyed as to which information sources (e.g. television shows, radio stations, websites and festivals) they usually use to find out about socio-political issues. Multiple linear regression analysis was conducted to explore patterns of influence between the audience characteristics and information consumption patterns. The results of this analysis indicated an apparent difference between the way citizens use news media sources and the way they use information sources from beyond the news media.
In essence, it appears that non-news-media information sources are used very deliberately to seek socio-political information, while news media sources are used in a less purposeful way. If media use in a non-election context, such as that of the present study, is not primarily concerned with deliberate information seeking, media use must instead have other primary purposes, with political information acquisition either a secondary driver or a by-product of that primary purpose. It appears, then, that political information consumption in a media-saturated society is more about routine 'practices' than it is about 'information seeking'. The suggestion that media use is no longer primarily concerned with information seeking, but rather is simply a behaviour which occurs within the broader set of everyday practices, reflects Couldry's (2004) media-as-practice paradigm. These findings highlight the need for more authentic and holistic contexts for media research. It is insufficient to consider information choices in isolation, or even across a wider range of information sources such as that incorporated in the present study. Future media research must take greater account of the broader social contexts and practices in which media-oriented behaviours occur. The findings also call into question the previously assumed centrality of trust to information selection decisions: citizens regularly use media they do not trust to find out about politics. If people are willing to use information sources they do not trust for democratically important topics such as politics, it is important that citizens possess the media literacy skills to effectively understand and evaluate the information they are presented with. Without the application of such media literacy skills, a steady diet of 'fast food' media may result in uninformed or misinformed voting decisions, which have implications for the effectiveness of democratic processes.
This research has emphasized the need for further holistic and authentically contextualised media use research, to better understand how citizens use information sources to find out about important topics such as politics.
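The multiple linear regression analysis mentioned above can be sketched in a few lines. All data, effect sizes and variable names below are simulated for illustration; they are assumptions, not the thesis's actual measures or results:

```python
import numpy as np

np.random.seed(1)

# Hypothetical illustration: regress an information-source-use score on
# three audience characteristics.  Outcome is built with known coefficients
# so the OLS estimates can be checked against them.
n = 200
scepticism = np.random.normal(0, 1, n)
need_for_cognition = np.random.normal(0, 1, n)
political_interest = np.random.normal(0, 1, n)
use = (0.5 * political_interest + 0.3 * need_for_cognition
       + np.random.normal(0, 1, n))

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones(n), scepticism,
                     need_for_cognition, political_interest])
beta, *_ = np.linalg.lstsq(X, use, rcond=None)
```

With simulated predictors the estimated coefficients recover the generating values (up to sampling error), which is the basic logic behind reading regression coefficients as patterns of influence.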
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of strategies have been identified in faking, a commonality of these strategies is the latent role of long-term memory. Seven studies were conducted to examine whether it is possible to detect the activation of faking-related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified using a lexical decision task. Throughout the course of these studies, the experimental methodology was increasingly refined in an attempt to successfully identify the relevant priming mechanisms. The results were consistent and robust throughout the three priming studies: faking good on a personality test primed positive faking-related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking-related cognitions. To more completely address potential issues with the stimuli and the possible role of affective priming, two additional studies were conducted. Studies 6A and 6B revealed that negative faking-related words were more arousing than positive faking-related words, and that positive faking-related words were more abstract than negative faking-related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard.
This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
This study investigated how the interpretation of mathematical problems by Year 7 students affected their ability to demonstrate what they can do in NAPLAN numeracy testing. In the study, mathematics is viewed as a culturally and socially determined system of signs and signifiers that establish the meaning, origins and importance of mathematics. The study hypothesises that students are unable to succeed in NAPLAN numeracy tests because they cannot interpret the questions, even though they may be able to perform the necessary calculations. To investigate this, the study applied contemporary theories of literacy to the context of mathematical problem solving. A case study design with multiple methods was used. The study used a correlational design to explore the connections between the NAPLAN literacy and numeracy outcomes of 198 Year 7 students in a Queensland school. Additionally, qualitative methods provided a rich description of the effect of the various forms of NAPLAN numeracy questions on the success of ten Year 7 students in the same school. The study argues that there is a quantitative link between reading and numeracy. It illustrates that interpretation (literacy) errors are the most common error type in the selected NAPLAN questions and are made by students of all abilities. In contrast, conceptual (mathematical) errors are less frequent amongst more capable students. This has important implications for preparing students for NAPLAN numeracy tests. The study concludes by recommending that an increased focus on the literacies of mathematics would be effective in improving NAPLAN results.
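The correlational part of the design above reduces to a Pearson correlation between reading and numeracy scores. The sketch below uses simulated scores with an invented effect size (the study's actual scores and correlation are not reproduced here):

```python
import numpy as np

np.random.seed(2)

# Hypothetical illustration: simulate NAPLAN-style reading and numeracy
# scores for 198 students with a built-in positive association, then
# compute the Pearson correlation between them.
n = 198
reading = np.random.normal(500, 70, n)
numeracy = 0.6 * reading + np.random.normal(200, 55, n)
r = np.corrcoef(reading, numeracy)[0, 1]
```

A positive `r` of this kind is the quantitative link between reading and numeracy that the study argues for; the qualitative error analysis then explains *why* the link exists.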
Abstract:
Cold-formed steel stud walls are a major component of Light Steel Framing (LSF) building systems used in commercial, industrial and residential buildings. In conventional LSF stud wall systems, thin steel studs are protected from fire by placing one or two layers of plasterboard on both sides, with or without cavity insulation. However, there is very limited data about the structural and thermal performance of stud wall systems, and past research has produced contradictory results, for example about the benefits of cavity insulation. This research was therefore conducted to improve the knowledge and understanding of the structural and thermal performance of cold-formed steel stud wall systems (both load bearing and non-load bearing) under fire conditions, and to develop new improved stud wall systems, including reliable and simple methods to predict their fire resistance rating. Full scale fire tests of cold-formed steel stud wall systems formed the basis of this research. This research proposed an innovative LSF stud wall system in which a composite panel, made of two plasterboards with insulation between them, was used to improve the fire rating. Hence the fire tests included both conventional steel stud walls, with and without cavity insulation, and the new composite panel system. A propane-fired gas furnace was first specially designed and constructed. The furnace was designed to deliver heat in accordance with the standard time-temperature curve as specified in AS 1530.4 (SA, 2005). A compression loading frame capable of loading the individual studs of a full scale steel stud wall system was also designed and built for the load-bearing tests. The fire tests included comprehensive time-temperature measurements across the thickness and along the length of all the specimens using K-type thermocouples. They also included the measurement of load-deformation characteristics of stud walls until failure.
The first phase of fire tests included 15 small scale fire tests of gypsum plasterboards and composite panels using different types of insulating material of varying thickness and density. The fire performance of single and multiple layers of gypsum plasterboard was assessed, including the effect of interfaces between adjacent plasterboards on thermal performance. The effects of insulation materials such as glass fibre, rock fibre and cellulose fibre were also determined, while the tests provided important data on the temperature at which the fall-off of external plasterboards occurred. In the second phase, nine small scale non-load bearing wall specimens were tested to investigate the thermal performance of conventional and innovative steel stud wall systems. The effects of single and multiple layers of plasterboard, with and without vertical joints, were investigated. The new composite panels were seen to offer greater thermal protection to the studs in comparison with the conventional panels. In the third phase of fire tests, nine full scale load bearing wall specimens were tested to study the thermal and structural performance of the load bearing wall assemblies. A full scale test was also conducted at ambient temperature. These tests showed that the use of cavity insulation led to inferior fire performance of walls, and provided good explanations and supporting research data to overcome the incorrect industry assumptions about cavity insulation. They demonstrated that the use of insulation externally in a composite panel enhanced the thermal and structural performance of stud walls and increased their fire resistance rating significantly. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. This research also included steady state tensile tests at ambient and elevated temperatures to address the lack of reliable mechanical properties for high grade cold-formed steels at elevated temperatures.
Suitable predictive equations were developed for calculating the yield strength and elastic modulus at elevated temperatures. In summary, this research has developed comprehensive experimental thermal and structural performance data for both the conventional and the proposed non-load bearing and load bearing stud wall systems under fire conditions. Idealised hot flange temperature profiles have been developed for non-insulated, cavity insulated and externally insulated load bearing wall models, along with suitable equations for predicting their failure times. A graphical method has also been proposed to predict the failure times (fire rating) of non-load bearing and load bearing walls under different load ratios. The results from this research are useful to both fire researchers and engineers working in this field. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF walls under fire conditions, and developed an innovative LSF wall system with increased fire rating. It has clearly demonstrated the detrimental effects of using cavity insulation, and has paved the way for Australian building industries to develop new wall panels with increased fire rating for commercial applications worldwide.