947 results for Implicit ODE
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of faking strategies have been identified, a commonality of these strategies is the latent role of long-term memory. Seven studies were conducted in order to examine whether it is possible to detect the activation of faking-related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified using a lexical decision task. Throughout the course of these studies, the experimental methodology was increasingly refined in an attempt to successfully identify the relevant priming mechanisms. The results were consistent and robust throughout the three priming studies: faking good on a personality test primed positive faking-related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking-related cognitions. To more completely address potential issues with the stimuli and the possible role of affective priming, two additional studies were conducted. Studies 6A and 6B revealed that negative faking-related words were more arousing than positive faking-related words, and that positive faking-related words were more abstract than negative faking-related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard. This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long-established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
Recently, because of new developments in sustainable engineering and renewable energy, which are often governed by fractional partial differential equations (FPDEs), numerical modelling and simulation in fractional calculus are attracting increasing attention from researchers. The currently dominant numerical method for modelling FPDEs is the Finite Difference Method (FDM), which is based on a pre-defined grid and therefore carries inherent shortcomings, including difficulty in simulating problems with complex domains and in using irregularly distributed nodes. Because of its distinctive advantages, the meshless method has good potential for the simulation of FPDEs. This paper aims to develop an implicit meshless collocation technique for FPDEs. The discrete system of FPDEs is obtained by using the meshless shape functions and the meshless collocation formulation. The stability and convergence of this meshless approach are investigated theoretically and numerically. Numerical examples with regular and irregular nodal distributions are used to validate and investigate the accuracy and efficiency of the newly developed meshless formulation. It is concluded that the present meshless formulation is very effective for the modelling and simulation of fractional partial differential equations.
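The abstract does not reproduce the shape functions or the collocation formulation, so the sketch below is only a hedged illustration of what an implicit collocation scheme for a time-fractional problem can look like: a 1D time-fractional diffusion equation solved with Gaussian radial-basis-function collocation in space and the L1 approximation of the Caputo derivative in time. The choice of RBF, the test problem and all parameter values are assumptions for illustration, not the authors' formulation.

```python
# Sketch only: implicit RBF-collocation solution of a 1D time-fractional
# diffusion equation  D_t^alpha u = kappa * u_xx,  0 < alpha < 1,
# using the L1 approximation of the Caputo time derivative.
# Parameters, the Gaussian shape function and the test problem are assumed.
import numpy as np
from math import gamma

alpha, kappa = 0.8, 1.0          # fractional order and diffusivity (assumed)
nx, nt, T = 21, 200, 0.5         # collocation nodes, time steps, final time
x = np.linspace(0.0, 1.0, nx)    # regular nodes; irregular nodes also work
tau = T / nt
eps = 10.0                       # RBF shape parameter (assumed)

# Gaussian RBF and its second derivative, phi(x, xj) = exp(-(eps*(x-xj))^2)
r = x[:, None] - x[None, :]
Phi = np.exp(-(eps * r) ** 2)
Phi_xx = (4 * eps**4 * r**2 - 2 * eps**2) * Phi

# Differentiation matrix mapping nodal values to nodal second derivatives
D2 = Phi_xx @ np.linalg.inv(Phi)

# L1 weights: b_k = (k+1)^(1-alpha) - k^(1-alpha)
k = np.arange(nt)
b = (k + 1.0) ** (1 - alpha) - k ** (1 - alpha)
c0 = tau ** (-alpha) / gamma(2 - alpha)

u = np.sin(np.pi * x)            # initial condition (assumed)
hist = [u.copy()]

A = c0 * np.eye(nx) - kappa * D2   # implicit system matrix
A[0, :], A[-1, :] = 0.0, 0.0       # homogeneous Dirichlet boundary rows
A[0, 0] = A[-1, -1] = 1.0

for n in range(1, nt + 1):
    # history (memory) term of the L1 scheme
    mem = np.zeros(nx)
    for j in range(1, n):
        mem += b[j] * (hist[n - j] - hist[n - j - 1])
    rhs = c0 * (hist[n - 1] - mem)
    rhs[0] = rhs[-1] = 0.0
    u = np.linalg.solve(A, rhs)    # implicit solve at every time step
    hist.append(u.copy())

print("max |u(x, T)| =", np.abs(u).max())
```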
Abstract:
Natural convection flow in a two-dimensional fluid-saturated porous enclosure with localized heating from below and symmetrical cooling from the sides, while the top and the rest of the bottom wall are insulated, has been investigated numerically. Darcy's law for porous media, along with the energy equation based on the first law of thermodynamics, has been considered. An implicit finite volume method with a TDMA solver is used to solve the governing equations. Localized heating is simulated by a centrally located isothermal heat source on the bottom wall, and four different values of the dimensionless heat source length, 1/5, 2/5, 3/5 and 4/5, are considered. The effects of the heat source length and the Rayleigh number on the streamlines and isotherms are presented, as well as the variation of the local rate of heat transfer, in terms of the local Nusselt number, from the heated wall. Finally, the average Nusselt number at the heated part of the bottom wall has been shown against the Rayleigh number for the different non-dimensional heat source lengths.
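The TDMA solver mentioned above is the Thomas algorithm, a direct O(n) solver for the tri-diagonal systems that implicit finite volume (and finite difference) discretisations produce along each grid line. A minimal sketch follows, with a backward-Euler diffusion step as an illustrative example system; this is not the authors' code or problem, and all values are assumed.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tri-diagonal system.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Returns the solution vector."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one implicit (backward Euler) step of a 1D diffusion problem,
# the kind of tri-diagonal system an implicit finite volume method yields.
n, dt, dx, alpha = 50, 1e-3, 1.0 / 49, 1.0      # illustrative values
r = alpha * dt / dx**2
a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
b[0] = b[-1] = 1.0; a[0] = c[0] = a[-1] = c[-1] = 0.0   # Dirichlet rows
T_old = np.sin(np.pi * np.linspace(0, 1, n))
d = T_old.copy(); d[0] = d[-1] = 0.0
T_new = tdma(a, b, c, d)
print("max T after one implicit step:", T_new.max())
```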
Abstract:
Natural convection flow from an isothermal vertical plate with a uniform heat source, embedded in a stratified medium, has been discussed in this paper. The resulting momentum and energy equations of the boundary layer approximation are made non-similar by introducing the usual non-similarity transformations. Numerical solutions of these equations are obtained by an implicit finite difference method for a wide range of the stratification parameter, X. The solutions are also obtained for different values of the pertinent parameters, namely the Prandtl number, Pr, and the heat generation or absorption parameter, λ, and are expressed in terms of the local skin friction and local heat transfer, which are shown in graphical form. The effects of heat generation or absorption on the streamlines and isotherms are also shown graphically for different values of λ.
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants' attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed: there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit – and the more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: "Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance." (1983, p55). This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. All of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
In the long term, with the development of skill, knowledge, exposure and confidence within the engineering profession, rigorous analysis techniques have the potential to become a reliable and far more comprehensive method for design and verification of the structural adequacy of OPS, write Nimal J Perera, David P Thambiratnam and Brian Clark. This paper explores the potential to enhance operator safety of self-propelled mechanical plant subjected to roll-over and impact from falling objects, using the non-linear and dynamic response simulation capabilities of analytical processes to supplement the quasi-static testing methods prescribed in International and Australian Codes of Practice for bolt-on Operator Protection Systems (OPS) that are post-fitted. The paper is based on research work carried out by the authors at the Queensland University of Technology (QUT) over a period of three years through instrumentation of prototype tests, scale model tests in the laboratory and rigorous analysis using validated Finite Element (FE) models. The FE codes used were ABAQUS for implicit analysis and LS-DYNA for explicit analysis. The rigorous analysis and dynamic simulation technique described in the paper can be used to investigate the structural response due to accident scenarios such as multiple roll-overs, impact of multiple objects and combinations of such events, and thereby enhance the safety and performance of Roll-Over and Falling Object Protection Systems (ROPS and FOPS). The analytical techniques are based on sound engineering principles and well-established practice for the investigation of dynamic impact on all self-propelled vehicles. They are used for many other similar applications where experimental techniques are not feasible.
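The implicit (ABAQUS) versus explicit (LS-DYNA) distinction referred to above is essentially a choice of time-integration scheme: implicit schemes solve an equation system at every step and remain stable for large steps, while explicit schemes are cheap per step but only conditionally stable. The single-degree-of-freedom sketch below contrasts an implicit Newmark (average acceleration) step with an explicit central-difference step; the oscillator properties, load and step size are illustrative assumptions unrelated to the OPS models in the paper.

```python
import numpy as np

# Single-degree-of-freedom oscillator  m*u'' + c*u' + k*u = p(t)
m, c, k = 1.0, 0.1, 100.0                 # illustrative mass, damping, stiffness
dt, nsteps = 0.01, 500
p = lambda t: 1.0 if t < 0.05 else 0.0    # short impulse-like load (assumed)

# --- Implicit Newmark, average acceleration (beta = 1/4, gamma = 1/2) ---
beta, gam = 0.25, 0.5
u, v = 0.0, 0.0
a = (p(0.0) - c * v - k * u) / m
keff = k + gam * c / (beta * dt) + m / (beta * dt**2)
u_imp = []
for n in range(1, nsteps + 1):
    t = n * dt
    peff = (p(t)
            + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
            + c * (gam * u / (beta * dt) + (gam / beta - 1) * v
                   + dt * (gam / (2 * beta) - 1) * a))
    u_new = peff / keff                    # one linear solve per step
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gam) * a + gam * a_new)
    u, v, a = u_new, v_new, a_new
    u_imp.append(u)

# --- Explicit central difference (stable only for dt < 2/omega_n); damping
#     handled with a simple backward-difference velocity as a simplification ---
u_prev, u_cur = 0.0, 0.0
a0 = (p(0.0) - k * u_cur) / m
u_prev = u_cur - 0.0 * dt + 0.5 * a0 * dt**2   # fictitious previous step
u_exp = []
for n in range(1, nsteps + 1):
    t = (n - 1) * dt
    a_cur = (p(t) - c * (u_cur - u_prev) / dt - k * u_cur) / m
    u_next = 2 * u_cur - u_prev + a_cur * dt**2   # no solve, just an update
    u_prev, u_cur = u_cur, u_next
    u_exp.append(u_cur)

print("peak displacement, implicit: %.4f  explicit: %.4f"
      % (max(np.abs(u_imp)), max(np.abs(u_exp))))
```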
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information. The explicit model uses a higher-order model, namely a 3-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
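As a hedged illustration of the "implicit", VSM-style combination described above (not the thesis' actual feature extraction, which mines frequent subtrees), the sketch below places simple structural features (tag paths) and content terms of small XML documents into one vector space and clusters them with k-means. The toy documents, the path-based structural features and the TF-IDF weighting are assumptions for illustration.

```python
# Sketch: combine structure (tag paths standing in for frequent subtrees)
# and content (text terms) in a single vector space, then cluster.
from xml.etree import ElementTree as ET
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "<movie><title>Alien</title><genre>scifi</genre></movie>",
    "<movie><title>Solaris</title><genre>scifi</genre></movie>",
    "<book><title>Dune</title><topic>desert planet</topic></book>",
    "<book><title>Foundation</title><topic>galactic empire</topic></book>",
]

def features(xml_string):
    """Emit one token per structural path and one token per content word."""
    root = ET.fromstring(xml_string)
    tokens = []
    def walk(node, path):
        path = path + "/" + node.tag
        tokens.append("STRUCT:" + path)            # structure feature
        if node.text and node.text.strip():
            for w in node.text.lower().split():
                tokens.append("TERM:" + w)          # content feature
        for child in node:
            walk(child, path)
    walk(root, "")
    return tokens

vec = TfidfVectorizer(analyzer=features)            # both feature types, one space
X = vec.fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # movies and books should fall into separate clusters
```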
Abstract:
Magnetohydrodynamic (MHD) natural convection laminar flow from an isothermal horizontal circular cylinder immersed in a fluid with viscosity proportional to a linear function of temperature is discussed with numerical simulations. The governing boundary layer equations are transformed into a non-dimensional form and the resulting nonlinear system of partial differential equations is reduced to a convenient form, which is solved numerically by two very efficient methods, namely (i) an implicit finite difference method together with the Keller box scheme and (ii) a direct numerical scheme. Numerical results are presented for the velocity and temperature distributions of the fluid as well as the heat transfer characteristics, namely the shearing stress and the local heat transfer rate in terms of the local skin-friction coefficient and the local Nusselt number, for a wide range of the magnetohydrodynamic parameter, the viscosity-variation parameter and the viscous dissipation parameter. MHD flow in this geometry with temperature-dependent viscosity is absent from the literature. The results obtained from the numerical simulations have been verified against each other using the two methodologies.
Abstract:
Sourcing appropriate funding for the provision of new urban infrastructure has been a policy dilemma for governments around the world for decades. This is particularly relevant in high-growth areas where new services are required to support swelling populations. The Australian infrastructure funding policy dilemmas are reflective of similar matters in many countries, particularly the United States of America, where infrastructure cost recovery policies have been in place since the 1970s. There is an extensive body of both theoretical and empirical literature from these countries that discusses the passing on (to home buyers) of these infrastructure charges, and the corresponding impact on housing prices. The theoretical evidence is consistent in its findings that infrastructure charges are passed on to home buyers by way of higher house prices. The empirical evidence is also consistent in its findings, with "overshifting" of these charges evident in all models since the 1980s, i.e. a $1 infrastructure charge results in a greater than $1 increase in house prices. However, despite over a dozen separate studies on this topic over two decades in the US, no empirical work has been carried out in Australia to test whether similar shifting or overshifting occurs here. The purpose of this research is to conduct a preliminary analysis of the more recent models used in these US empirical studies in order to identify the key study area selection criteria and success factors. The paper concludes that many of the study area selection criteria are implicit rather than explicit. By collecting data across the models, some implicit criteria become apparent, whilst others remain elusive. This data will inform future research on whether an existing model can be adopted or adapted for use in Australia.
Abstract:
Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs, practices, and students' preferences in written error correction. The current study adopted features of an ethnographic research design in order to explore the beliefs and practices of ESL teachers, and investigate the preferences of L2 students regarding written error correction in the context of a language institute situated in the Brisbane metropolitan district. In this study, two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations. The preferences of L2 students were elicited through focus group interviews. Responses of the participants were encoded and analysed. Results of the teacher interviews showed that teachers believe that providing written error correction has advantages and disadvantages. Teachers believe that providing written error correction helps students improve their proof-reading skills in order to revise their writing more efficiently. However, results also indicate that providing written error correction is very time consuming. Furthermore, teachers prefer to provide explicit written feedback strategies during the early stages of the language course, and move to a more implicit strategy of providing written error correction in order to facilitate language learning. On the other hand, results of the focus group interviews suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing. However, students also feel that the process of providing written error correction is time consuming. Nevertheless, students want and expect their teachers to provide written feedback because they believe that the benefits they gain from receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.
Abstract:
PURPOSE. To assess whether there are any advantages of binocular over monocular vision under blur conditions. METHODS. We measured the effect of defocus, induced by positive lenses, on the pattern-reversal Visual Evoked Potential (VEP) and on visual acuity (VA). Monocular (dominant eye) and binocular VEPs were recorded from thirteen volunteers (average age: 28±5 years, average spherical equivalent: -0.25±0.73 D) for defocus up to 2.00 D induced using positively powered lenses. VEPs were elicited using reversing 10 arcmin checks at a rate of 4 reversals/second. The stimulus subtended a circular field of 7 degrees with 100% contrast and a mean luminance of 30 cd/m². VA was measured under the same conditions using ETDRS charts. All measurements were performed at a 1 m viewing distance with best spectacle sphero-cylindrical correction and natural pupils. RESULTS. With binocular stimulation, the amplitudes of the P100 component of the VEPs were greater, and the implicit times shorter, in all cases than with monocular stimulation. The mean binocular enhancement ratio in the P100 amplitude was 2.1 in focus, increasing linearly with defocus to 3.1 at +2.00 D of defocus. The mean peak latency was 2.9 ms shorter in focus with binocular than with monocular stimulation, with the difference increasing with defocus to 8.8 ms at +2.00 D. As for the VEP amplitude, VA was always better with binocular than with monocular vision, with the difference being greater for higher retinal blur. CONCLUSIONS. Both the subjective and electrophysiological results show that binocular vision ameliorates the effect of defocus. The increased binocular facilitation observed with retinal blur may be due to the activation of a larger population of neurons at close-to-threshold detection under binocular stimulation.
Abstract:
With the advent of social web initiatives, some have argued that these new emerging tools might be useful for tacit knowledge sharing by providing interactive and collaborative technologies. However, there is still a paucity of literature on how, and to what extent, social media might contribute to facilitating tacit knowledge sharing. Therefore, this paper theoretically investigates and maps social media concepts and characteristics against tacit knowledge creation and sharing requirements. Through a systematic literature review, five major requirements were found that need to be present in an environment that involves tacit knowledge sharing. These requirements were analysed against social media concepts and characteristics to see how they map together. The results show that social media have the ability to meet some of the main requirements of tacit knowledge sharing. The relationships are illustrated in a conceptual framework, and further empirical studies are suggested to substantiate the findings of this study.
Abstract:
Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice-based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment are used to estimate the parameters for the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as it is with lattice-based models, but is an emergent model property. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
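A hedged sketch of the general idea of a lattice-free, individual-based model with crowding (not the authors' model or parameterisation): cells are points in a continuous domain, each division attempt places a daughter at a fixed distance in a uniformly random direction, and the attempt is aborted if the daughter would overlap an existing cell, so the confluent density emerges from the packing rather than from a lattice. All parameter values below are assumptions.

```python
# Sketch of a lattice-free individual-based proliferation model with
# cell-to-cell crowding; division attempts that would overlap are aborted.
import numpy as np

rng = np.random.default_rng(0)
L = 100.0                        # side length of the square domain (assumed)
sigma = 10.0                     # cell diameter = exclusion distance (assumed)
lam, dt, steps = 0.1, 0.1, 400   # proliferation rate, time step, steps (assumed)

cells = rng.uniform(0, L, size=(10, 2))     # initial random seeding

density = []
for _ in range(steps):
    for i in rng.permutation(len(cells)):
        if rng.random() < lam * dt:          # division attempt this step
            theta = rng.uniform(0, 2 * np.pi)
            daughter = cells[i] + sigma * np.array([np.cos(theta), np.sin(theta)])
            if (daughter < 0).any() or (daughter > L).any():
                continue                     # daughter would fall outside the domain
            d = np.linalg.norm(cells - daughter, axis=1)
            if np.all(d >= sigma):           # crowding check: no overlap allowed
                cells = np.vstack([cells, daughter])
    density.append(len(cells) / L**2)

# The confluent density is whatever the packing allows, not a lattice capacity.
print("final cell count:", len(cells), " density:", density[-1])
```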
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch lengths has received little attention. I examine this implicit assumption of all molecular dating methods on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which have been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate, respectively, to "hidden" rate heterogeneity and to changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
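The RY-recoding step referred to above (AG → R; CT → Y) is straightforward to reproduce; a minimal sketch with a made-up toy alignment follows (the actual dataset and analysis pipeline are not reproduced here).

```python
# Sketch of RY recoding: purines (A, G) -> R, pyrimidines (C, T) -> Y.
# This keeps only transversion signal, which saturates more slowly than
# the transition signal. The example alignment is invented for illustration.
RY_MAP = str.maketrans({"A": "R", "G": "R", "C": "Y", "T": "Y",
                        "a": "R", "g": "R", "c": "Y", "t": "Y"})

def ry_recode(seq: str) -> str:
    """Return the purine/pyrimidine (RY) recoding of a nucleotide sequence;
    gaps and ambiguity codes are left unchanged."""
    return seq.translate(RY_MAP)

alignment = {
    "taxon1": "ATGCCATTGA-",
    "taxon2": "ATGTCACTGAA",
}
for name, seq in alignment.items():
    print(name, ry_recode(seq))
# taxon1 RYRYYRYYRR-
# taxon2 RYRYYRYYRRR
```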
Abstract:
The recently introduced Australian Curriculum: English Version 3.0 (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2012) requires students to ‘read’ multimodal text and describe the effects of structure and organisation. We begin this article by tracing the variable understandings of what reading multimodal text might entail through the Framing Paper (National Curriculum Board, 2008), the Framing Paper Consultation Report (National Curriculum Board, 2009a), the Shaping Paper (National Curriculum Board, 2009b) and Version 3.0 of the Australian Curriculum English (ACARA, 2012). Our findings show that the theoretical and descriptive framework for doing so is implicit. Drawing together multiple but internally coherent theories from the field of semiotics, we suggest one way to work towards three Year 5 learning outcomes from the reading/writing mode. The affordances of assembling a broad but explicit technical metalanguage for an informed reading of the integrated design elements of multimodal texts are noted.