254 results for Complexity of Distribution
Abstract:
This study sought to examine how measures of player experience used in videogame research relate to Metacritic Professional and User scores. In total, 573 participants completed an online survey in which they responded to the Player Experience of Need Satisfaction (PENS) questionnaire and the Game Experience Questionnaire (GEQ) in relation to their current favourite videogame. Correlations among the data indicate an overlap between the player experience constructs and the factors informing Metacritic scores. Additionally, differences emerged in the ways professionals and users appear to allocate game ratings. However, the data also provide clear evidence that Metacritic scores do not reflect the full complexity of player experience and may be misleading in some cases.
Abstract:
The literature on humour in teaching frequently defaults to a series of maxims about how it can be used most appropriately: ‘Never tease students', ‘Don't joke about sensitive issues', ‘Never use laughter for disciplinary purposes'. This paper outlines recent research into the boundaries of humour use within teacher education, which itself forms one part of a large-scale, broadly based study into the use of humour within tertiary teaching. This part of the research involves semi-structured, in-depth interviews with university academics. Following the ‘benign violations' theory of humour, wherein, to be funny, a situation or statement must be some kind of social violation, that violation must be regarded as relatively benign, and the two ideas must be held simultaneously, this paper suggests that the willingness of academics to use particular types of humour in their teaching revolves around the complexities of determining the margins of ‘the benign'. These margins are shaped in part by pedagogic limitations, but also by professional delimitations. In terms of limitations, the boundaries of humour are set by the academic environment of the university, by the characteristics of different cohorts of students, and by what those students are prepared to laugh at. In terms of delimitations, most academics are prepared to tease their students, and many are prepared to use laughter as a form of discipline; however, their own humour orientation, academic seniority, and employment security play a large role in determining what kinds of humour will be used, and where boundaries will be set. The central conclusion is that formal maxims of humour provide little more than vague strategic guidelines, largely failing to account for the complexity of teaching relationships, for the differences between student cohorts, and for the talents and standing of particular teachers.
A framework for understanding and generating integrated solutions for residential peak energy demand
Abstract:
Supplying peak energy demand in a cost-effective, reliable manner is a critical focus for utilities internationally. Successfully addressing peak energy concerns requires an understanding of all the factors that affect electricity demand, especially at peak times. This paper builds on past attempts to propose models designed to aid our understanding of the influences on residential peak energy demand in a systematic and comprehensive way. Our model has been developed through a group model building process as a systems framework of the problem situation, to model the complexity within and between systems and to indicate how changes in one element might flow on to others. It comprises themes (social, technical and change management options) networked together in a way that captures their influence on and association with each other, as well as their influence, association and impact on appliance usage and residential peak energy demand. The real value of the model is in creating awareness, understanding and insight into the complexity of residential peak energy demand, and in working with this complexity to identify and integrate the social, technical and change management option themes and their impact on appliance usage and residential energy demand at peak times.
Abstract:
Biomechanical analysis of sport performance provides an objective method of determining the performance of a particular sporting technique. In particular, it aims to add to the understanding of the mechanisms influencing performance, the characterisation of athletes, and insights into injury predisposition. Whilst the performance of able-bodied athletes is well recognised in the literature, less is known about the complexity, constraints and demands placed on the body of an individual with a disability. This paper provides a dialogue that outlines scientific issues in the performance analysis of multi-level athletes with a disability, including Paralympians. Four integrated themes are explored, the first of which focuses on how biomechanics can contribute to the understanding of sport performance in athletes with a disability and how it may be used as an evidence-based tool. This latter point questions the potential for a possible cultural shift led by the emergence of user-friendly instruments. The second theme briefly discusses the role of reliability in sport performance analysis and addresses the debate between two-dimensional and three-dimensional analysis. The third theme addresses key biomechanical parameters and provides guidance to clinicians and coaches on the approaches adopted in biomechanical/sport performance analysis, from an athlete with a disability starting out to the emerging and elite Paralympian. For completeness, the final theme addresses the controversial issues of the role of assistive devices and the inclusion of Paralympians in able-bodied sport. All combined, this dialogue highlights the intricate relationship between biomechanics and the training of individuals with a disability. Furthermore, it illustrates the complexity of modern athlete training, which can only lead to a better appreciation of the performances to be delivered at the London 2012 Paralympic Games.
Abstract:
Many researchers in the field of civil structural health monitoring have developed and tested their methods on simple to moderately complex laboratory structures such as beams, plates, frames, and trusses. Field work has also been conducted by many researchers and practitioners on more complex operating bridges. However, most laboratory structures do not adequately replicate the complexity of truss bridges. This paper presents preliminary results of experimental modal testing and analysis of the bridge model presented in the companion paper, using the peak-picking method, and compares these results with those of a simple numerical model of the structure. Three dominant modes of vibration were experimentally identified below 15 Hz. The mode shapes and the order of the modes matched those of the numerical model; however, the frequencies did not match.
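The peak-picking method mentioned above identifies natural frequencies as local maxima of a measured frequency-response magnitude. A minimal sketch, using a synthetic three-mode spectrum with illustrative frequencies (not values from the bridge study):

```python
# Minimal sketch of the peak-picking method for modal identification.
# A magnitude spectrum with three modes below 15 Hz is synthesised as a
# sum of single-degree-of-freedom FRF magnitudes; natural frequencies
# are then picked as prominent local maxima.
import numpy as np
from scipy.signal import find_peaks

freqs = np.linspace(0.1, 20.0, 2000)              # frequency axis (Hz)
modes = [(4.2, 0.02), (7.8, 0.02), (12.5, 0.02)]  # (f_n, damping ratio), illustrative

H = np.zeros_like(freqs)
for fn, zeta in modes:
    r = freqs / fn
    H += 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

# Peak picking: local maxima well above the off-resonance level
peaks, _ = find_peaks(H, prominence=5.0)
identified = freqs[peaks]
print(identified)   # estimates near 4.2, 7.8 and 12.5 Hz
```

In practice the spectrum would come from measured acceleration data, and mode shapes would be read off from the relative peak amplitudes across sensor locations.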
Abstract:
The American Association of Australasian Literary Studies (AAALS) Annual Conference, Fort Worth, Texas, 9–11 April 2015. The dark fluidity of Melbourne suburbia in Sonya Hartnett’s Butterfly. Sonya Hartnett’s Butterfly (2009) is a fictional account of the suburban family life of the Coyles in 1980s outer suburban Melbourne, written from the perspective of teenager Plum Coyle. The Coyle family at first glance appear to be living a textbook example of the suburban lifestyle developed from the nineteenth century and sustained well into the twentieth century, in which housing design and gender roles were clearly defined and “connected with a normative heterosexuality” (Johnson 2000: 94). The Australian suburban space is also well documented as a place where people often have to contend with oppressive, rigid social and cultural ideals (e.g. Rowse 1978; Johnson 1993; Turnbull 2008; Flew 2011). There is a tendency to treat “the suburb” as one monolithic space, but this paper will argue that Hartnett exposes the dark fluidity and the complexity of the term, just as she reveals that despite, or perhaps because of, the planned nature of suburbia, the lives that people live are often just as complex.
Abstract:
This project is a step forward in developing effective methods to mitigate voltage unbalance in urban residential networks. The proposed method reduces energy losses and improves quality of service in strongly unbalanced low-voltage networks. It is based on phase swapping, together with optimal placement and sizing of a Distribution Static Synchronous Compensator (D-STATCOM) using Particle Swarm Optimisation.
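The placement-and-sizing step described above can be sketched with a bare-bones Particle Swarm Optimisation loop. The loss function below is a hypothetical smooth surrogate standing in for a power-flow-based network-loss evaluation, and the decision variables (bus index, compensator size) and bounds are illustrative, not taken from the study:

```python
# Minimal PSO sketch for D-STATCOM placement/sizing (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def network_loss(x):
    # Hypothetical surrogate: losses minimised at bus ~12, size ~0.5 MVAr
    bus, size = x
    return (bus - 12.0) ** 2 / 100.0 + (size - 0.5) ** 2

n_particles, n_iter = 30, 200
lo, hi = np.array([1.0, 0.0]), np.array([33.0, 2.0])   # search bounds
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([network_loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([network_loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)   # should approach bus 12, size 0.5 for this surrogate
```

In a real study, `network_loss` would run an unbalanced three-phase power flow for each candidate placement, and the bus index would be rounded to a discrete location.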
Abstract:
Next-generation sequencing techniques have revolutionised the field over the last decade, providing researchers with low-cost, high-throughput alternatives to traditional Sanger sequencing methods. These techniques have rapidly evolved from the first generation to the fourth generation, with very broad applications such as unravelling the complexity of the genome in terms of genetic variation, and have had a high impact on the biological field. In this review, we discuss the transition of sequencing from the second generation to the third and fourth generations, and describe some of their novel biological applications. With the advancement of technology, the earlier challenges of minimising instrument size, increasing throughput flexibility, easing data analysis and shortening run times are being addressed. However, prospective analyses are still needed to test whether knowledge of any given newly identified variant has an effect on clinical outcome.
Abstract:
Purpose: Performance heterogeneity between collaborative infrastructure projects is typically examined by considering procurement systems and their governance mechanisms at static points in time. The literature neglects to consider the impact of dynamic learning capability, which is thought to reconfigure governance mechanisms over time in response to evolving market conditions. This conceptual paper proposes a new model to show how continuous joint learning of participant organisations improves project performance.
Design/methodology/approach: There are two stages of conceptual development. In the first stage, the management literature is analysed to explain the Standard Model of dynamic learning capability that emphasises three learning phases for organisations. This Standard Model is extended to derive a novel Circular Model of dynamic learning capability that shows a new feedback loop between performance and learning. In the second stage, the construction management literature is consulted, adding project lifecycle, stakeholder diversity and three organisational levels to the analysis, to arrive at the Collaborative Model of dynamic learning capability.
Findings: The Collaborative Model should enable construction organisations to successfully adapt and perform under changing market conditions. The complexity of learning cycles results in capabilities that are imperfectly imitable between organisations, explaining performance heterogeneity on projects.
Originality/value: The Collaborative Model provides a theoretically substantiated description of project performance, driven by the evolution of procurement systems and governance mechanisms. The Model’s empirical value will be tested in future research.
Abstract:
The mineral lamprophyllite is fundamentally a silicate based upon tetrahedral siloxane units with extensive substitution in the formula. Lamprophyllite belongs to a complex group of sorosilicates with the general chemical formula A2B4C2Si2O7(X)4, where the A site can be occupied by strontium, barium, sodium, and potassium; the B site by sodium, titanium, iron, manganese, magnesium, and calcium; and the C site mainly by titanium or ferric iron, while X includes the anions fluoride, hydroxyl, and oxide. Chemical composition shows a homogeneous phase composed of Si, Na, Ti, and Fe. This complexity of formula is reflected in the complexity of both the Raman and infrared spectra. The Raman spectrum is characterized by intense bands at 918 and 940 cm−1. Other intense Raman bands are found at 576, 671, and 707 cm−1. These bands are assigned to the stretching and bending modes of the tetrahedral siloxane units.
Abstract:
In the past two decades, complexity thinking has emerged as an important theoretical response to the limitations of orthodox ways of understanding educational phenomena. Complexity provides ways of understanding that embrace uncertainty, non-linearity and the inevitable ‘messiness’ that is inherent in educational settings, paying attention to the ways in which the whole is greater than the sum of its parts. This is the first book to focus on complexity thinking in the context of physical education, enabling fresh ways of thinking about research, teaching, curriculum and learning. Written by a team of leading international physical education scholars, the book highlights how the considerable theoretical promise of complexity can be reflected in the actual policies, pedagogies and practices of physical education (PE). It encourages teachers, educators and researchers to embrace notions of learning that are more organic and emergent, to allow the inherent complexity of pedagogical work in PE to be examined more broadly and inclusively. In doing so, Complexity Thinking in Physical Education makes a major contribution to our understanding of pedagogy, curriculum design and development, human movement and educational practice.
Abstract:
We report the first 3D maps of genetic effects on brain fiber complexity. We analyzed HARDI brain imaging data from 90 young adult twins using an information-theoretic measure, the Jensen-Shannon divergence (JSD), to gauge the regional complexity of the white matter fiber orientation distribution functions (ODF). HARDI data were fluidly registered using Karcher means and ODF square-roots for interpolation; each subject's JSD map was computed from the spatial coherence of the ODFs in each voxel's neighborhood. We evaluated the genetic influences on generalized fiber anisotropy (GFA) and complexity (JSD) using structural equation models (SEM). At each voxel, genetic and environmental components of data variation were estimated, and their goodness of fit tested by permutation. Color-coded maps revealed that the optimal models varied for different brain regions. Fiber complexity was predominantly under genetic control, and was higher in more highly anisotropic regions. These methods show promise for discovering factors affecting fiber connectivity in the brain.
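The Jensen-Shannon divergence at the heart of this approach is a symmetric, bounded dissimilarity between two probability distributions. A minimal sketch, treating two discretised ODFs simply as normalised histograms with illustrative values:

```python
# Jensen-Shannon divergence between two discrete distributions,
# here standing in for two discretised orientation distribution
# functions (ODFs); values are illustrative.
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (log base 2, so bounded by 1)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)                  # mixture distribution
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(jsd(p, p))   # 0.0 for identical ODFs
print(jsd(p, q))   # > 0 for divergent ODFs, at most 1
```

In the study's setting, the JSD would be computed between each voxel's ODF and those of its neighbours, so that high average divergence signals locally complex fiber architecture.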
Abstract:
The SNP-SNP interactome has rarely been explored in the context of neuroimaging genetics, mainly due to the complexity of conducting approximately 10^11 pairwise statistical tests. However, recent advances in machine learning, specifically the iterative sure independence screening (SIS) method, have enabled the analysis of datasets where the number of predictors is much larger than the number of observations. Using an implementation of the SIS algorithm (called EPISIS), we performed an exhaustive search of the genome-wide SNP-SNP interactome to identify and prioritize SNPs for interaction analysis. We identified a significant SNP pair, rs1345203 and rs1213205, associated with temporal lobe volume. We further examined the full-brain, voxelwise effects of the interaction in the ADNI dataset and separately in an independent dataset of healthy twins (QTIM). We found that each additional loading in the epistatic effect was associated with approximately 5% greater regional brain volume (a protective effect) in both the ADNI and QTIM samples.
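The core idea of sure independence screening is to rank a very large set of predictors by their marginal correlation with the response and keep only the top-ranked ones for downstream modelling. A minimal sketch on simulated data; this is not the EPISIS implementation, and the dimensions and effect sizes are illustrative:

```python
# Sure independence screening (SIS) sketch: rank p >> n predictors
# (e.g. SNPs) by marginal correlation with a phenotype and retain the
# top d for downstream (e.g. interaction) analysis. Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5000                      # far more predictors than samples
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 7] - 1.5 * X[:, 42] + rng.standard_normal(n)

# Absolute marginal correlation of each predictor with the response
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
corr = np.abs(Xc.T @ yc) / n

d = 20                                # screening budget
selected = np.argsort(corr)[-d:]      # indices of the top-d predictors
print(sorted(selected))               # includes the true signals 7 and 42
```

The iterative variant re-screens residuals after fitting the retained predictors, which helps recover signals masked by correlation among predictors.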
Abstract:
Modelling fluvial processes is an effective way to reproduce basin evolution and to recreate riverbed morphology. However, due to the complexity of alluvial environments, deterministic modelling of fluvial processes is often impossible. To address the related uncertainties, we derive a stochastic fluvial process model on the basis of the convective Exner equation that uses the statistics (mean and variance) of river velocity as input parameters. These statistics allow for quantifying the uncertainty in riverbed topography, river discharge and position of the river channel. In order to couple the velocity statistics and the fluvial process model, the perturbation method is employed with a non-stationary spectral approach to develop the Exner equation as two separate equations: the first one is the mean equation, which yields the mean sediment thickness, and the second one is the perturbation equation, which yields the variance of sediment thickness. The resulting solutions offer an effective tool to characterize alluvial aquifers resulting from fluvial processes, which allows incorporating the stochasticity of the paleoflow velocity.
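The decomposition described above can be sketched schematically (the notation is illustrative, not the paper's exact formulation; here $\eta$ is the sediment thickness and $c(v)$ a sediment-transport celerity that depends on the flow velocity $v$):

```latex
\frac{\partial \eta}{\partial t} + c(v)\,\frac{\partial \eta}{\partial x} = 0,
\qquad v = \langle v \rangle + v', \qquad \eta = \langle \eta \rangle + \eta' .
```

Ensemble-averaging this equation yields the mean equation for $\langle \eta \rangle$ (the mean sediment thickness); subtracting the mean equation and forming second moments yields the perturbation equation for the variance $\langle \eta'^2 \rangle$, driven by the input velocity statistics.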
Abstract:
Critical illness, acute renal failure and continuous renal replacement therapy (CRRT) are associated with changes in pharmacokinetics. The initial antibiotic dose should be based on the published volume of distribution and should generally be at least the standard dose, as the volume of distribution is usually unchanged or increased. Subsequent doses should be based on total clearance. Total clearance varies with the CRRT clearance, which mainly depends on the effluent flow rate and the sieving coefficient (or saturation coefficient). As antibiotic clearance by healthy kidneys is usually higher than clearance by CRRT, except for colistin, subsequent doses should generally be lower than those given to patients without renal dysfunction. In the future, therapeutic drug monitoring, together with sophisticated pharmacokinetic models that account for pharmacokinetic variability, may enable more appropriate individualized dosing.
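The clearance relationship described above can be sketched as a back-of-the-envelope calculation: extracorporeal clearance is approximated as effluent flow rate times the sieving (or saturation) coefficient, and the maintenance dose is scaled by total clearance relative to normal clearance. All numbers below are illustrative placeholders, not dosing advice:

```python
# Schematic CRRT dosing arithmetic (illustrative values only).

def crrt_clearance(effluent_flow_l_h, sieving_coefficient):
    """Approximate antibiotic clearance by CRRT (L/h)."""
    return effluent_flow_l_h * sieving_coefficient

def maintenance_dose(standard_dose_mg, cl_total_l_h, cl_normal_l_h):
    """Scale the standard maintenance dose by the clearance ratio."""
    return standard_dose_mg * cl_total_l_h / cl_normal_l_h

cl_crrt = crrt_clearance(2.0, 0.8)   # 2 L/h effluent, sieving coeff. 0.8
cl_total = cl_crrt + 0.5             # plus assumed non-renal clearance
dose = maintenance_dose(1000, cl_total, 6.0)
print(cl_crrt, round(dose))          # 1.6 L/h CRRT clearance, 350 mg dose
```

This mirrors the abstract's point: because CRRT clearance (here 1.6 L/h) is typically below healthy renal clearance (here 6 L/h), the scaled maintenance dose comes out lower than the standard dose.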