Abstract:
Timoshenko's shear deformation theory is widely used for the dynamical analysis of shear-flexible beams. This paper presents a comparative study of the shear deformation theory with a higher order model, of which Timoshenko's shear deformation model is a special case. Results indicate that while Timoshenko's shear deformation theory gives reasonably accurate information regarding the set of bending natural frequencies, there are considerable discrepancies in the information it gives regarding the mode shapes and dynamic response, and so there is a need to consider higher order models for the dynamical analysis of flexure of beams.
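For a simply supported beam, both the Euler-Bernoulli and Timoshenko theories admit closed-form frequency equations, which makes the kind of bending-frequency comparison described above easy to reproduce. The sketch below is a generic illustration with assumed steel-beam values, not the paper's higher-order model; `kappa` denotes the shear correction factor.

```python
import math

def euler_bernoulli_freq(n, L, E, I, rho, A):
    """Bending natural frequency (rad/s) of mode n for a simply
    supported Euler-Bernoulli beam."""
    k = n * math.pi / L
    return k**2 * math.sqrt(E * I / (rho * A))

def timoshenko_freq(n, L, E, G, kappa, I, rho, A):
    """Lower (bending-branch) natural frequency of mode n for a simply
    supported Timoshenko beam.  For mode shapes sin(kx), cos(kx) the
    coupled equations reduce to a quadratic in w^2:
      (kappa*G*A*k^2 - rho*A*w^2) * (E*I*k^2 + kappa*G*A - rho*I*w^2)
          = (kappa*G*A*k)^2
    """
    k = n * math.pi / L
    kGA = kappa * G * A
    # expand to a*w^4 + b*w^2 + c = 0
    a = rho * A * rho * I
    b = -(rho * A * (E * I * k**2 + kGA) + rho * I * kGA * k**2)
    c = E * I * kGA * k**4
    w2 = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # lower root
    return math.sqrt(w2)
```

Shear deformation and rotary inertia lower the bending frequencies relative to the Euler-Bernoulli values; for a slender beam the two theories nearly agree, which is consistent with the abstract's point that frequency predictions diverge less than mode shapes and dynamic response.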
Abstract:
With the extension of the work of the preceding paper, the relativistic front form for Maxwell's equations for electromagnetism is developed and shown to be particularly suited to the description of paraxial waves. The generators of the Poincaré group in a form applicable directly to the electric and magnetic field vectors are derived. It is shown that the effect of a thin lens on a paraxial electromagnetic wave is given by a six-dimensional transformation matrix, constructed out of certain special generators of the Poincaré group. The method of construction guarantees that the free propagation of such waves as well as their transmission through ideal optical systems can be described in terms of the metaplectic group, exactly as found for scalar waves by Bacry and Cadilhac. An alternative formulation in terms of a vector potential is also constructed. It is chosen in a gauge suggested by the front form and by the requirement that the lens transformation matrix act locally in space. Pencils of light with accompanying polarization are defined for statistical states in terms of the two-point correlation function of the vector potential. Their propagation and transmission through lenses are briefly considered in the paraxial limit. This paper extends Fourier optics and completes it by formulating it for the Maxwell field. We stress that the derivations depend explicitly on the "henochromatic" idealization as well as the identification of the ideal lens with a quadratic phase shift and are heuristic to this extent.
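The six-dimensional Poincaré-generator matrices of the abstract are not reproduced here, but the underlying idea, that free propagation and ideal thin lenses act as matrices on paraxial data, can be illustrated with the standard scalar 2x2 ray-transfer (ABCD) calculus; this is a simpler scalar analogue, and all names below are illustrative.

```python
def propagate(d):
    """ABCD (ray-transfer) matrix for free paraxial propagation over d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """ABCD matrix for an ideal thin lens of focal length f
    (a quadratic phase shift in the wave picture)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(m, n):
    """Compose two 2x2 ray matrices (m applied after n)."""
    return [[m[0][0] * n[0][0] + m[0][1] * n[1][0],
             m[0][0] * n[0][1] + m[0][1] * n[1][1]],
            [m[1][0] * n[0][0] + m[1][1] * n[1][0],
             m[1][0] * n[0][1] + m[1][1] * n[1][1]]]
```

Imaging (1/o + 1/i = 1/f) corresponds to a vanishing B element of propagate(i) composed with thin_lens(f) and propagate(o), with the A element giving the magnification -i/o.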
Abstract:
Estimation of von Bertalanffy growth parameters has received considerable attention in fisheries research. Since Sainsbury (1980, Can. J. Fish. Aquat. Sci. 37: 241-247) much of this research effort has centered on accounting for individual variability in the growth parameters. In this paper we demonstrate that, in analysis of tagging data, Sainsbury's method and its derivatives do not, in general, satisfactorily account for individual variability in growth, leading to inconsistent parameter estimates (the bias does not tend to zero as sample size increases to infinity). The bias arises because these methods do not use appropriate conditional expectations as a basis for estimation. This bias is found to be similar to that of the Fabens method. Such methods would be appropriate only under the assumption that the individual growth parameters that generate the growth increment were independent of the growth parameters that generated the initial length. However, such an assumption would be unrealistic. The results are derived analytically, and illustrated with a simulation study. Until techniques that take full account of the appropriate conditioning are developed, the effect of individual variability on growth will remain incompletely understood.
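For reference, the Fabens increment fit whose bias properties are at issue above works directly on tagging data (initial length, increment, time at liberty). The following is a minimal sketch of the generic estimator, not the authors' analysis: the growth coefficient k is profiled over a grid, and for each fixed k the asymptotic length has a closed-form least-squares solution.

```python
import math

def fabens_fit(L1, dL, dt, k_grid):
    """Least-squares fit of the Fabens increment model
        dL_i = (Linf - L1_i) * (1 - exp(-k * dt_i))
    profiling over a grid of k values; for fixed k the optimal Linf is
    linear least squares in closed form.  Returns (k_hat, Linf_hat)."""
    best = None
    for k in k_grid:
        x = [1.0 - math.exp(-k * t) for t in dt]
        # normal equation for Linf given k
        num = sum(xi * (d + xi * l) for xi, d, l in zip(x, dL, L1))
        den = sum(xi * xi for xi in x)
        linf = num / den
        sse = sum((d - xi * (linf - l)) ** 2
                  for xi, d, l in zip(x, dL, L1))
        if best is None or sse < best[0]:
            best = (sse, k, linf)
    return best[1], best[2]
```

On data without individual variability the fit recovers the shared parameters; the abstract's point is precisely that when the parameters generating the increment are correlated with those generating the initial length, estimators of this form become inconsistent.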
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which is referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
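When catchability is redrawn independently from a beta distribution at each sampling, the removal likelihood factorizes into beta-binomial terms on the remaining population. The sketch below assumes that factorization; the function names are mine, not the paper's, and this is one simple version of such a model rather than the authors' exact formulation.

```python
import math

def lbeta(a, b):
    """Log of the Beta function."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_logpmf(c, n, a, b):
    """log P(C = c) for C ~ BetaBinomial(n, a, b): a binomial count
    whose success probability is drawn from Beta(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(c + 1) - math.lgamma(n - c + 1)
            + lbeta(c + a, n - c + b) - lbeta(a, b))

def removal_loglik(N, a, b, catches):
    """Log-likelihood of successive removal catches from a closed
    population of size N, with catchability drawn independently from
    Beta(a, b) at each sampling (the source of overdispersion)."""
    ll, remaining = 0.0, N
    for c in catches:
        if c > remaining:
            return float("-inf")   # impossible under the model
        ll += beta_binom_logpmf(c, remaining, a, b)
        remaining -= c
    return ll
```

Maximizing this log-likelihood over N (and the beta parameters) yields the point estimate; the extra-binomial spread of the beta-binomial terms is what widens the confidence intervals relative to the fixed-catchability models.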
Abstract:
This study analyses the influence of chaos theory on fiction and on literary criticism, and argues that the role of chaos theory in the literary field is best understood through the concepts it has opened up. Rather than being applied directly, chaos theory has enabled new kinds of conversations about old topics, and concepts drawn from the natural sciences have allowed previously deadlocked arguments to be reopened from a new perspective. The dissertation concentrates on three areas: theorizing the structure of the literary work, conceiving and describing human (especially authorial) identity, and reflecting on the relationship between fiction and reality. The aim of the study is to show how these topics have been approached through chaos theory both in literary criticism and in works of fiction. At the centre of the dissertation are analyses of works by the novelist John Barth, the dramatist Tom Stoppard, and the poet Jorie Graham. These writers draw on chaos theory for ways of conceptualizing structures that are at once dynamic processes and perceivable forms. Recurring literary themes include human identity, paradoxically recognizable yet ever-changing, and a reality that escapes final appropriation but remains fascinating and worth pursuing. Through the analysis of these writers' works and through theoretical discussion, the dissertation brings out a humanist perspective on the significance of chaos theory in literature, one that emphasizes coherence, intelligibility, and realism and that has been overshadowed in earlier research.
Abstract:
Information structure and Kabyle constructions: three sentence types in the Construction Grammar framework. The study examines three Kabyle sentence types and their variants. These sentence types have been chosen because they code the same state of affairs but have different syntactic structures. The sentence types are the Dislocated sentence, the Cleft sentence, and the Canonical sentence. I argue, first, that a proper description of these sentence types should include information structure and, second, that a description which takes information structure into account is possible in the Construction Grammar framework. The study thus constitutes a testing ground for the applicability of Construction Grammar to a lesser-known language, notably because the three sentence types cannot be differentiated without information structure categories, which must consequently be integrated into the grammatical description as well. The information structure analysis is based on the model outlined by Knud Lambrecht, in which information structure is considered a component of sentence grammar that ensures pragmatically correct sentence forms. The work starts with an examination of the three sentence types and the analyses that have been carried out in André Martinet's functional grammar framework. This introduces the sentence types chosen as the object of study and discusses the difficulties related to their analysis. After a presentation of the state of the art, including earlier and more recent models, the principles and notions of Construction Grammar and of Lambrecht's model are introduced and explicated. The information structure analysis is presented in three chapters, each treating one of the three sentence types. The analyses are based on spoken language data and elicitation. Prosody is included in the study when a syntactic structure seems to code two different focus structures.
In such cases, it is pertinent to investigate whether these are coded by prosody. The final chapter presents the constructions that have been established and the problems encountered in analysing them. It also discusses the impact of the study on the theories used and on the theory of syntax in general.
Abstract:
Perceiving students, science students especially, as mere consumers of facts and information belies the importance of engaging them with the principles underlying those facts and runs counter to the facilitation of knowledge and understanding. Traditional didactic lecture approaches need a rethink if student classroom engagement and active learning are to be valued over fact memorisation and fact recall. In our undergraduate biomedical science programs across Years 1, 2 and 3 in the Faculty of Health at QUT, we have developed an authentic learning model with an embedded suite of pedagogical strategies that foster classroom engagement and allow for active learning in the sub-discipline area of medical bacteriology. The suite of pedagogical tools we have developed has been designed to enable its translation, with appropriate fine-tuning, to most biomedical and allied health discipline teaching and learning contexts. Indeed, aspects of the pedagogy have been successfully translated to the nursing microbiology study stream at QUT. The aims underpinning the pedagogy are for our students to: (1) connect scientific theory with scientific practice in a more direct and authentic way, (2) construct factual knowledge and facilitate a deeper understanding, and (3) develop and refine their higher order flexible thinking and problem solving skills, both semi-independently and independently. The mindset and role of the teaching staff are critical to this approach since, for the strategy to be successful, tertiary teachers need to abandon traditional instructional modalities based on one-way information delivery. Face-to-face classroom interactions between students and lecturer enable realisation of pedagogical aims (1), (2) and (3). The strategy we have adopted encourages teachers to view themselves more as expert guides in what is very much a student-focused process of scientific exploration and learning.
Specific pedagogical strategies embedded in the authentic learning model we have developed include: (i) interactive lecture-tutorial hybrids or lectorials featuring teacher role-plays as well as class-level question-and-answer sessions, (ii) inclusion of “dry” laboratory activities during lectorials to prepare students for the wet laboratory to follow, (iii) real-world problem-solving exercises conducted during both lectorials and wet laboratory sessions, and (iv) designing class activities and formative assessments that probe a student’s higher order flexible thinking skills. Flexible thinking in this context encompasses analytical, critical, deductive, scientific and professional thinking modes. The strategic approach outlined above is designed to provide multiple opportunities for students to apply principles flexibly according to a given situation or context, to adapt methods of inquiry strategically, to go beyond mechanical application of formulaic approaches, and, as much as possible, to self-appraise their own thinking and problem solving. The pedagogical tools have been developed within both workplace (real-world) and theoretical frameworks. The philosophical core of the pedagogy is a coherent pathway of teaching and learning which we, and many of our students, believe is more conducive to student engagement and active learning in the classroom. Qualitative and quantitative data derived from online and hardcopy evaluations, solicited and unsolicited student and graduate feedback, anecdotal evidence as well as peer review indicate that: (i) our students are engaging with the pedagogy, (ii) a constructivist, authentic-learning approach promotes active learning, and (iii) students are better prepared for workplace transition.
Abstract:
The Gesture of Exposure: on the presentation of the work of art in the modern art exhibition. The topic of this dissertation is the presentation of art works in the modern art exhibition as the established and conventionalized form of art encounter. It investigates the possibility of theorizing the art exhibition as a separate object of research, and attempts to examine the relationship between the art work and its presentation in a modern art exhibition. The study takes its point of departure in the area vaguely defined as exhibition studies, and in the lack of a general problematization of the analytical tools used for closer examination of the modern art exhibition. Also lacking is a closer consideration of what happens to the work of art when it is exposed in an art exhibition. The aim of the dissertation is to find a set of concepts that can be used for further theorization. The art exhibition is here treated, on the one hand, as an act of exposure, as a showing gesture; on the other hand, it is seen as a spatiality, a space that is produced in the act of showing. Both aspects are seen as intimately involved in knowledge production. The dissertation is divided into four parts, in which different aspects of the art exhibition are analyzed using different theoretical approaches. The first part uses the archaeological model of Michel Foucault and discusses the exhibition as a discursive formation based on communicative activity. The second part analyses the derived concepts of gesture and space. This leads to the proposition of three metaphorical spatialities (the frame, the agora and the threshold), which are seen as providing a possibility for a further extension of the theory of exhibitions. The third part extends the problematization of the relationship between the individual work of art and its exposure through the ideas of Walter Benjamin and Maurice Blanchot.
The fourth part carries out a close reading of three presentations from the modern era in order to further examine the relationship between the work of art and its presentation, using the tools that have been developed during the study. In the concluding section, it is possible to see clearer borderlines and conditions for the development of an exhibition theory. The concepts that have been analysed and developed into tools are shown to be useful, and the examples take the discussion into a consideration of the altered premises for the encounter with the postmodern work of art.
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this ambivalence corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions.
Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored with the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can partially be explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.
Abstract:
It could be argued that advancing practice in critical care has been superseded by the advanced practice agenda. Some would suggest that advancing practice is focused on the core attributes of an individual's practice progressing onto advanced practice status. However, advancing practice is more of a process than a set of identifiable skills, and as such is often neglected when viewing the development of practitioners to the advanced practice level. For example, practice development initiatives can be seen as advancing practice for the masses, which ensures that practitioners are following the same level of practice. The question here is: are they developing individually? The aim of this paper is to discuss the potential development of a conceptual model of knowledge integration pertinent to critical care nursing practice. In an attempt to explore the development of leading-edge critical care thinking and practice, a new model for advancing practice in critical care is proposed. This paper suggests that reflection may not be the best model for advancing practice unless the individual practitioner has a sound knowledge base, both theoretical and experiential. Drawing on the contemporary literature and recent doctoral research, the knowledge integration model presented here uses multiple learning strategies that are focused in practice to develop practice, for example the use of work-based learning and clinical supervision. Ongoing knowledge acquisition and its relationship with previously held theory and experience will enable individual practitioners to advance their own practice as well as to be a resource for others.
Abstract:
We show that the large anomalous Hall constants of mixed-valence and Kondo-lattice systems can be understood in terms of a simple resonant-level Fermi-liquid model. Splitting of a narrow, orbitally unquenched, spin-orbit split, f resonance in a magnetic field leads to strong skew scattering of band electrons. We interpret both the anomalous signs and the strong temperature dependence of Hall mobilities in CeCu2Si2, SmB6, and CePd3 in terms of this theory.
Abstract:
Synthetic backcrossed-derived bread wheats (SBWs) from CIMMYT were grown in the Northwest of Mexico at Centro de Investigaciones Agrícolas del Noroeste (CIANO) and sites across Australia during three seasons. During three consecutive years Australia received “shipments” of different SBWs from CIMMYT for evaluation. A different set of lines was evaluated each season, as new materials became available from the CIMMYT crop enhancement program. These consisted of approximately 100 advanced lines (F7) per year. SBWs had been top- and backcrossed to CIMMYT cultivars in the first two shipments and to Australian wheat cultivars in the third one. At CIANO, the SBWs were trialled under receding soil moisture conditions. We evaluated both the performance of each line across all environments and the genotype-by-environment interaction using an analysis that fits a multiplicative mixed model, adjusted for spatial field trends. Data were organised in three groups of multienvironment trials (MET) containing germplasm from shipment 1 (METShip1), 2 (METShip2), and 3 (METShip3), respectively. Large components of variance for the genotype × environment interaction were found for each MET analysis, due to the diversity of environments included and the limited replication over years (only in METShip2 were lines tested over 2 years). The average percentage of genetic variance explained by the factor analytic models with two factors was 50.3% for METShip1, 46.7% for METShip2, and 48.7% for METShip3. Yield comparison focused only on lines that were present in all locations within a METShip, or “core” SBWs. A number of core SBWs, crossed to both Australian and CIMMYT backgrounds, outperformed the local benchmark checks at sites from the northern end of the Australian wheat belt, with reduced success at more southern locations. In general, lines that succeeded in the north were different from those in the south.
The moderate positive genetic correlation between CIANO and locations in the northern wheat growing region likely reflects similarities in average temperature during flowering, high evaporative demand, and a short flowering interval. We are currently studying attributes of this germplasm that may contribute to adaptation, with the aim of improving the selection process in both Mexico and Australia.
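The "percentage of genetic variance explained" by a factor-analytic model is conventionally computed per environment from the factor loadings and the specific variances, then averaged over environments. The sketch below shows that formula with made-up inputs; the paper's spatially adjusted multiplicative mixed-model machinery is not reproduced here.

```python
def percent_genetic_variance(loadings, psi):
    """Average percentage of genetic variance accounted for by a
    k-factor factor-analytic model.  For environment j with loadings
    row lambda_j and specific variance psi_j, the explained share is
        100 * (lambda_j . lambda_j) / (lambda_j . lambda_j + psi_j),
    i.e. the diagonal of Lambda*Lambda' against the total genetic
    variance for that environment."""
    pcts = []
    for row, p in zip(loadings, psi):
        common = sum(l * l for l in row)   # (Lambda Lambda')_jj
        pcts.append(100.0 * common / (common + p))
    return sum(pcts) / len(pcts)
```

Figures like the 50.3%, 46.7%, and 48.7% quoted above are averages of exactly this kind of per-environment ratio, computed from the fitted two-factor loadings.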
Abstract:
The thesis studies the translation process for the laws of Finland as they are translated from Finnish into Swedish. The focus is on revision practices, norms and workplace procedures. The translation process studied covers three institutions and four revisions. In three separate studies the translation process is analyzed from the perspective of the translations, the institutions and the actors. The general theoretical framework is Descriptive Translation Studies. For the analysis of revisions made in versions of the Swedish translation of Finnish laws, a model is developed covering five grammatical categories (textual revisions, syntactic revisions, lexical revisions, morphological revisions and content revisions) and four norms (legal adequacy, correct translation, correct language and readability). A separate questionnaire-based study was carried out with translators and revisers at the three institutions. The results show that the number of revisions does not decrease during the translation process, and no division of labour can be seen at the different stages. This is somewhat surprising if the revision process is regarded as one of quality control. Instead, all revisers make revisions on every level of the text. Further, the revisions do not necessarily imply errors in the translations but are often the result of revisers following different norms for legal translation. The informal structure of the institutions and its impact on communication, visibility and workplace practices was studied from the perspective of organization theory. The results show weaknesses in the communicative situation, which affect co-operation both between institutions and between individuals. Individual attitudes towards norms and their relative authority also vary, in the sense that revisers largely prioritize legal adequacy whereas translators give linguistic norms a higher value.
Further, multi-professional teamwork in the institutions studied shows a kind of teamwork based on individuals and institutions doing specific tasks with little contact with others. This shows that the established definitions of teamwork, with people co-working in close contact with each other, cannot directly be applied to the workplace procedures in the translation process studied. Three new concepts are introduced: flerstegsrevidering (multi-stage revision), revideringskedja (revision chain) and normsyn (norm attitude). The study seeks to make a contribution to our knowledge of legal translation, translation processes, institutional translation, revision practices and translation norms for legal translation. Keywords: legal translation, translation of laws, institutional translation, revision, revision practices, norms, teamwork, organizational informal structure, translation process, translation sociology, multilingual.
Abstract:
The two-dimensional, q-state (q > 4) Potts model is used as a testing ground for approximate theories of first-order phase transitions. In particular, the predictions of a theory analogous to the Ramakrishnan-Yussouff theory of freezing are compared with those of ordinary mean-field (Curie-Weiss) theory. It is found that the Curie-Weiss theory is a better approximation than the Ramakrishnan-Yussouff theory, even though the former neglects all fluctuations. It is shown that the Ramakrishnan-Yussouff theory overestimates the effects of fluctuations in this system. The reasons behind the failure of the Ramakrishnan-Yussouff approximation and the suitability of using the two-dimensional Potts model as a testing ground for these theories are discussed.
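The Curie-Weiss side of the comparison is easy to write down explicitly: the mean-field Potts free energy per site is a function of a single scalar order parameter, and its first-order transition point is a textbook result. The sketch below covers only that mean-field side (not the Ramakrishnan-Yussouff functional), with K denoting the dimensionless coupling beta*z*J.

```python
import math

def potts_mf_free_energy(s, q, K):
    """Mean-field (Curie-Weiss) free energy per site, in units of kT,
    for the q-state Potts model at coupling K = beta*z*J.  The order
    parameter s in [0, 1) gives one state probability (1+(q-1)s)/q and
    each of the remaining q-1 states probability (1-s)/q."""
    m1 = (1.0 + (q - 1) * s) / q
    m2 = (1.0 - s) / q

    def xlogx(x):
        return 0.0 if x == 0.0 else x * math.log(x)

    energy = -0.5 * K * (m1 ** 2 + (q - 1) * m2 ** 2)
    entropy = xlogx(m1) + (q - 1) * xlogx(m2)   # minus entropy, in kT units
    return energy + entropy

def first_order_coupling(q):
    """Textbook mean-field result: at K = 2(q-1)ln(q-1)/(q-2) the
    disordered minimum s = 0 and the ordered minimum s = (q-2)/(q-1)
    are degenerate, i.e. a first-order transition for q > 2."""
    return 2.0 * (q - 1) * math.log(q - 1) / (q - 2)
```

Scanning s at fixed K shows the characteristic two-minimum structure with a barrier between them, which is the mean-field signature of the first-order transition the abstract uses as its benchmark.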
Abstract:
A strong-coupling expansion for the Green's functions, self-energies, and correlation functions of the Bose-Hubbard model is developed. We illustrate the general formalism, which includes all possible (normal-phase) inhomogeneous effects, such as disorder or a trap potential, as well as effects of thermal excitations. The expansion is then employed to calculate the momentum distribution of the bosons in the Mott phase for an infinite homogeneous periodic system at zero temperature through third order in the hopping. By using scaling theory for the critical behavior at zero momentum and at the critical value of the hopping for the Mott insulator–to–superfluid transition along with a generalization of the random-phase-approximation-like form for the momentum distribution, we are able to extrapolate the series to infinite order and produce very accurate quantitative results for the momentum distribution in a simple functional form for one, two, and three dimensions. The accuracy is better in higher dimensions and is on the order of a few percent relative error everywhere except close to the critical value of the hopping divided by the on-site repulsion. In addition, we find simple phenomenological expressions for the Mott-phase lobes in two and three dimensions which are much more accurate than the truncated strong-coupling expansions and any other analytic approximation we are aware of. The strong-coupling expansions and scaling-theory results are benchmarked against numerically exact quantum Monte Carlo simulations in two and three dimensions and against density-matrix renormalization-group calculations in one dimension. These analytic expressions will be useful for quick comparison of experimental results to theory and in many cases can bypass the need for expensive numerical simulations.