991 results for Computational sciences
Abstract:
Nondeclarative memory and novelty processing in the brain are actively studied fields of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend, specifically rising activity for unfamiliar stimuli, question the generality of repetition suppression and stir debate over the underlying neural mechanisms. This letter introduces a theory and computational model that extend existing theories and suggest that both trends are, in principle, the rising and falling parts of an inverted U-shaped dependence of activity on stimulus novelty that may naturally emerge in a neural network with Hebbian learning and lateral inhibition. We further demonstrate that the proposed model is sufficient for the simulation of dissociable forms of repetition priming using real-world stimuli. The results of our simulation also suggest that the novelty of stimuli used in neuroscientific research must be assessed in a particularly cautious way. The potential importance of the inverted U in stimulus processing and its relationship to the acquisition of knowledge and competencies in humans is also discussed.
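The two mechanisms the letter names, Hebbian weight updates and lateral inhibition, can be sketched in a few lines. The following is a minimal illustration (not the authors' model; network sizes, learning rate and the winner-take-all form of inhibition are all assumptions), tracking how the activity evoked by a repeated stimulus evolves; the trajectory's shape under varying novelty is exactly the dependence the letter analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 50, 10          # input and output layer sizes (illustrative)
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))  # initial feedforward weights
eta = 0.05                    # Hebbian learning rate (assumed)

def respond(x, W, k=1):
    """Feedforward drive followed by hard lateral inhibition:
    only the k most strongly driven units stay active."""
    drive = W @ x
    y = np.zeros_like(drive)
    winners = np.argsort(drive)[-k:]
    y[winners] = drive[winners]
    return y

def hebb_step(x, W):
    """One Hebbian update, with per-unit weight normalization to keep W bounded."""
    y = respond(x, W)
    W = W + eta * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W, y.sum()

# Repeatedly present one stimulus and track the total activity it evokes.
x = rng.random(n_in)
activity = []
for _ in range(30):
    W, total = hebb_step(x, W)
    activity.append(total)
print(activity[:5], activity[-5:])  # activity trajectory under repetition
```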
Abstract:
RNase S is a complex consisting of two proteolytic fragments of RNase A: the S peptide (residues 1-20) and the S protein (residues 21-124). RNase S and RNase A have very similar X-ray structures and enzymatic activities. Previous experiments have shown increased rates of hydrogen exchange and greater sensitivity to tryptic cleavage for RNase S relative to RNase A. It has therefore been asserted that the RNase S complex is considerably more dynamically flexible than RNase A. In the present study we examine the differences in the dynamics of RNase S and RNase A computationally, by MD simulations, and experimentally, using trypsin cleavage as a probe of dynamics. The fluctuations around the average solution structure during the simulation were analyzed by measuring the RMS deviation in coordinates. No significant differences between RNase S and RNase A dynamics were observed in the simulations. We were able to account for the apparent discrepancy between simulation and experiment by a simple model. According to this model, the experimentally observed differences in dynamics can be quantitatively explained by the small amounts of free S peptide and S protein that are present in equilibrium with the RNase S complex. Thus, folded RNase A and the RNase S complex have identical dynamic behavior, despite the presence of a break in the polypeptide chain between residues 20 and 21 in the latter molecule. This is in contrast to what has been widely believed for over 30 years about this important fragment complementation system.
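The "simple model" invoked here is a population-weighted average over the bound complex and the free fragments: the observed cleavage rate is k_obs = (1 − f_free) k_complex + f_free k_free, where f_free follows from the 1:1 binding equilibrium. A back-of-envelope sketch, with the dissociation constant and per-state rates as illustrative assumptions (not the paper's values), shows how a small free population can dominate the observable:

```python
import numpy as np

def free_fraction(total, kd):
    """Fraction of free S peptide/S protein for a 1:1 complex at equal
    total concentrations `total`, with dissociation constant `kd` (same
    units). Solves x**2 = kd * (total - x) for the free concentration x."""
    x = (-kd + np.sqrt(kd**2 + 4 * kd * total)) / 2
    return x / total

# Illustrative numbers only (not from the paper):
total = 1e-4     # 100 uM total fragment concentration
kd = 1e-7        # 0.1 uM dissociation constant
f_free = free_fraction(total, kd)

k_complex = 1.0  # relative trypsin cleavage rate of the folded complex
k_free = 1e3     # relative cleavage rate of the free, unprotected peptide

k_obs = (1 - f_free) * k_complex + f_free * k_free
print(f"free fraction = {f_free:.4f}, observed rate = {k_obs:.1f}x")
```

With these placeholder numbers, roughly 3% free fragment inflates the apparent cleavage rate about thirty-fold, which is the sense in which the complex can look more "flexible" than it is.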
Abstract:
In the past few decades, the humanities and social sciences have developed new methods of reorienting their conceptual frameworks in a “world without frontiers.” In this book, Bernadette M. Baker offers an innovative approach to rethinking sciences of mind as they formed at the turn of the twentieth century, via the concerns that have emerged at the turn of the twenty-first. The less-visited texts of Harvard philosopher and psychologist William James provide a window into contemporary debates over principles of toleration, anti-imperial discourse, and the nature of ethics. Baker revisits Jamesian approaches to the formation of scientific objects including the child mind, exceptional mental states, and the ghost to explore the possibilities and limits of social scientific thought dedicated to mind development and discipline formation around the construct of the West.
Abstract:
Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions, and with a series of block-like heat-generating components is studied numerically for a range of Reynolds and Grashof numbers with a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions can be identified assuming repeated placement of the blocks and fluid entry and exit openings at regular distances, neglecting end-wall effects. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure. The Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k–ε model and the Launder–Sharma low-Reynolds-number k–ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flows is higher by about 30 per cent than that in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest, such as the flow and temperature distributions, Nusselt number, pressure drop and the maximum dimensionless temperature in the block, along with correlations.
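The reported Gr^(1/3) scaling of the dimensionless natural-convection inlet velocity is easy to put numbers to. A hedged sketch follows; the fluid properties, enclosure height and the proportionality constant C are placeholders (the paper supplies the actual correlation), with the Grashof number in its standard form Gr = gβΔT·H³/ν²:

```python
# Dimensionless natural-convection inlet velocity scaling: V* ~ C * Gr^(1/3).
# All property values and C below are illustrative placeholders.
g = 9.81          # m/s^2, gravitational acceleration
beta = 3.4e-3     # 1/K, thermal expansion coefficient of air near 300 K
nu = 1.6e-5       # m^2/s, kinematic viscosity of air
dT = 20.0         # K, temperature difference (assumed)
H = 0.05          # m, characteristic enclosure height (assumed)
C = 1.0           # correlation constant (placeholder)

Gr = g * beta * dT * H**3 / nu**2        # Grashof number
V_star = C * Gr**(1.0 / 3.0)             # dimensionless inlet velocity
V = V_star * nu / H                      # back to m/s with the nu/H scale
print(f"Gr = {Gr:.3e}, V* = {V_star:.1f}, V = {V:.4f} m/s")
```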
Resumo:
Cardiovascular disease is the leading causes of death in the developed world. Wall shear stress (WSS) is associated with the initiation and progression of atherogenesis. This study combined the recent advances in MR imaging and computational fluid dynamics (CFD) and evaluated the patient-specific carotid bifurcation. The patient was followed up for 3 years. The geometry changes (tortuosity, curvature, ICA/CCA area ratios, central to the cross-sectional curvature, maximum stenosis) and the CFD factors (Velocity distribute, Wall Shear Stress (WSS) and Oscillatory Shear Index (OSI)) were compared at different time points.The carotid stenosis was a slight increase in the central to the cross-sectional curvature, and it was minor and variable curvature changes for carotid centerline. The OSI distribution presents ahigh-values in the same region where carotid stenosis and normal border, indicating complex flow and recirculation.The significant geometric changes observed during the follow-up may also cause significant changes in bifurcation hemodynamics.
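OSI is conventionally defined from the time-resolved WSS vector over one cardiac cycle as OSI = 0.5 · (1 − |∫τ dt| / ∫|τ| dt), ranging from 0 (unidirectional shear) to 0.5 (purely oscillatory shear). A sketch of that standard definition (the authors' actual post-processing pipeline is not specified in the abstract):

```python
import numpy as np

def osi(tau, dt):
    """Oscillatory Shear Index at one wall point.
    tau: array of shape (n_steps, 3), WSS vector over one cardiac cycle;
    dt: time step. Returns a value in [0, 0.5]."""
    mean_vec_mag = np.linalg.norm(np.sum(tau, axis=0) * dt)   # |integral of tau|
    mean_mag = np.sum(np.linalg.norm(tau, axis=1)) * dt       # integral of |tau|
    return 0.5 * (1.0 - mean_vec_mag / mean_mag)

# Unidirectional shear -> OSI ~ 0; sign-alternating shear -> OSI ~ 0.5.
t = np.linspace(0, 1, 200, endpoint=False)
steady = np.stack([np.ones_like(t), 0 * t, 0 * t], axis=1)
oscillating = np.stack([np.sin(2 * np.pi * t), 0 * t, 0 * t], axis=1)
print(osi(steady, t[1] - t[0]), osi(oscillating, t[1] - t[0]))
```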
Abstract:
To help with the clinical screening and diagnosis of abdominal aortic aneurysm (AAA), we evaluated the effect of inflow angle (IA) and outflow bifurcation angle (BA) on the distribution of blood flow and wall shear stress (WSS) in an idealized AAA model. A 2D incompressible Newtonian flow is assumed, and the computational simulation is performed using the finite volume method. The results showed that the largest WSS was often located at the proximal and distal ends of the AAA. An increase in IA resulted in an increase in maximum WSS. We also found that WSS was maximal when BA was 90°. IA and BA are two important geometrical factors; they may help with AAA risk assessment along with the commonly used AAA diameter.
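For orientation, the Newtonian WSS in a straight parent vessel can be estimated from fully developed Poiseuille flow, τ_w = 4μQ/(πR³); the geometry-driven extremes the simulation reports at the aneurysm ends are deviations from this baseline. A sketch with assumed, illustrative numbers (not the paper's):

```python
import math

def poiseuille_wss(mu, Q, R):
    """Wall shear stress of fully developed Newtonian pipe flow:
    tau_w = 4 * mu * Q / (pi * R**3)."""
    return 4.0 * mu * Q / (math.pi * R**3)

mu = 3.5e-3   # Pa*s, blood dynamic viscosity (common assumption)
Q = 1.0e-5    # m^3/s (~0.6 L/min mean flow, illustrative)
R = 0.01      # m, undilated aortic radius (illustrative)

print(f"baseline WSS ~ {poiseuille_wss(mu, Q, R):.3f} Pa")
```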
Abstract:
Objective: To compare the differences in the hemodynamic parameters of abdominal aortic aneurysm (AAA) between a fluid-structure interaction model (FSIM) and a fluid-only model (FM), so as to discuss their application in AAA research. Methods: An idealized AAA model was created based on patient-specific AAA data. In the FM, the flow, pressure and wall shear stress (WSS) were computed using the finite volume method. In the FSIM, an Arbitrary Lagrangian-Eulerian algorithm was used to solve the flow in a continuously deforming geometry. The hemodynamic parameters of both models were obtained for discussion. Results: Under the same inlet velocity, there were only two symmetrical vortexes in the AAA dilation area for the FSIM. In contrast, four recirculation areas existed in the FM; two were main vortexes and the other two were secondary flows located between the main recirculation areas and the arterial wall. Six local pressure concentrations occurred at the distal end of the AAA and in the recirculation areas for the FM, whereas there were only two local pressure concentrations in the FSIM. The vortex center of the recirculation area in the FSIM was much closer to the distal end of the AAA, and the area was much larger because of AAA expansion. Four extreme values of WSS existed at the proximal end of the AAA, the point of boundary-layer separation, the point of flow reattachment and the distal end of the AAA, respectively, in both the FM and the FSIM. The maximum wall stress and the largest wall deformation were both located at the proximal and distal ends of the AAA. Conclusions: The number and centers of the recirculation areas differ between the two models, and the change of vortex is closely associated with AAA growth. The largest WSS of the FSIM is 36% smaller than that of the FM. Both the maximum wall stress and the largest wall displacement increase as the outlet pressure increases. The FSIM needs to be considered for studying the relationship between AAA growth and shear stress.
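A common zeroth-order check on wall stresses of the magnitude discussed here is the thin-wall Laplace law, σ = Pr/t for a cylinder (Pr/2t for a sphere); FSI simulations refine this by resolving local geometry, flow and wall deformation. A hedged sketch with assumed dimensions, not values from the study:

```python
def laplace_wall_stress(p, r, t, spherical=False):
    """Thin-wall Laplace-law estimate of wall stress.
    p: transmural pressure (Pa), r: radius (m), t: wall thickness (m).
    Cylindrical hoop stress p*r/t, or p*r/(2*t) for a spherical cap."""
    return p * r / (2 * t) if spherical else p * r / t

p = 16000.0   # Pa (~120 mmHg), assumed systolic pressure
r = 0.025     # m, assumed aneurysm radius (5 cm diameter AAA)
t = 1.5e-3    # m, assumed wall thickness

print(f"hoop stress ~ {laplace_wall_stress(p, r, t) / 1e3:.0f} kPa")
```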
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of working ICS structures, while, with respect to working ICS selection, the standardized CIC test is also shown to have satisfactory performance. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests might be used.
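CIC is usually written as tr(Omega_I · V_r): the model-based information matrix under working independence times the robust (sandwich) covariance under the candidate working structure, with smaller values preferred. A sketch of how such a comparison can be assembled with statsmodels GEE follows; the data are simulated and the implementation is an assumption-laden illustration, not the paper's standardized (p-value) test:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated longitudinal data with a cluster random effect, which induces
# an exchangeable intra-cluster correlation structure (ICS).
rng = np.random.default_rng(1)
n, m = 100, 4                                    # clusters, obs per cluster
groups = np.repeat(np.arange(n), m)
x = rng.normal(size=n * m)
b = np.repeat(rng.normal(scale=0.5, size=n), m)  # cluster effect
y = 1.0 + 0.5 * x + b + rng.normal(size=n * m)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

def fit(cov_struct):
    return sm.GEE.from_formula("y ~ x", groups="g", data=df,
                               cov_struct=cov_struct).fit()

# Omega_I: model-based (naive) information under working independence.
omega_i = np.linalg.inv(fit(sm.cov_struct.Independence()).cov_naive)

for name, cs in [("independence", sm.cov_struct.Independence()),
                 ("exchangeable", sm.cov_struct.Exchangeable())]:
    v_r = fit(cs).cov_robust                     # sandwich covariance under cs
    print(f"{name:13s} CIC = {np.trace(omega_i @ v_r):.3f}")  # smaller preferred
```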
Abstract:
This article develops a method for the analysis of growth data with multiple recaptures when the initial ages of all individuals are unknown. Existing approaches either impute the initial ages or model them as random effects. Assumptions about the initial age are not verifiable because all the initial ages are unknown. We present an alternative approach that treats all the lengths, including the length at first capture, as correlated repeated measures for each individual. Optimal estimating equations are developed using the generalized estimating equations approach, which requires only the first two moment assumptions. Explicit expressions for the estimation of both mean growth parameters and variance components are given to minimize the computational complexity. Simulation studies indicate that the proposed method works well. Two real data sets are analyzed for illustration, one from whelks (Dicathais aegrota) and the other from southern rock lobster (Jasus edwardsii) in South Australia.
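The classic way to sidestep unknown ages in tagging data is Fabens' re-parameterization of the von Bertalanffy curve, which models the increment between captures given the length at first capture: ΔL = (L∞ − L₁)(1 − e^(−KΔt}), so initial age never enters. A hedged least-squares sketch of that baseline on simulated data (the article's GEE approach additionally models the correlation and variance structure of the repeated lengths):

```python
import numpy as np
from scipy.optimize import curve_fit

def fabens_increment(X, Linf, K):
    """Expected growth increment between captures, von Bertalanffy form:
    E[dL] = (Linf - L1) * (1 - exp(-K * dt)). Initial age never appears."""
    L1, dt = X
    return (Linf - L1) * (1.0 - np.exp(-K * dt))

# Simulated tag-recapture data (illustrative, not the whelk/lobster data).
rng = np.random.default_rng(2)
L1 = rng.uniform(40, 120, size=200)    # length at first capture (mm)
dt = rng.uniform(0.2, 3.0, size=200)   # years at liberty
dL = fabens_increment((L1, dt), 150.0, 0.4) + rng.normal(0, 2.0, size=200)

(Linf_hat, K_hat), _ = curve_fit(fabens_increment, (L1, dt), dL, p0=(140.0, 0.3))
print(f"Linf = {Linf_hat:.1f} mm, K = {K_hat:.3f} per year")
```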
Abstract:
Over the past three decades, literary and cultural studies have become increasingly aware of the complex nature of the relationship between science and art. Today the study of these two cultures forms a field of its own, in which their relationship is examined above all as a dynamic interaction that reflects the language, values and ideological contents of our culture. In contrast to earlier views that treat science and art as more or less opposed endeavours, current research starts from the assumption that they are culturally constructed discourses which often face similar problems in modelling reality, even though the methods they employ differ. Within this relationship, my dissertation concentrates on the language used in popularized science writing (by, among others, Paul Davies, James Gleick and Richard Dawkins) and on the devices employed by fiction that draws its ideas from the natural sciences (by, among others, Jeanette Winterson, Tom Stoppard and Richard Powers), drawing on a textual analysis of style and themes in a corpus of more than 30 works. With regard to popular science writing, my aim is to show that its language relies to a considerable extent on structures that make it possible to present arguments about reality as convincingly as possible. Many of the figures defined by classical rhetoric play an important role in this task, because they bind the content and the form of what is said tightly together: the use of rhetorical figures is thus not a mere stylistic device, but often also crystallizes the philosophy-of-science assumptions underlying the arguments and helps establish the logic of the argumentation. Since many earlier studies have concentrated solely on the role of metaphor in scientific arguments, this dissertation seeks to broaden the field by analysing the use of other kinds of figures as well. I also show that the use of rhetorical figures constitutes a point of contact with fiction that draws on scientific ideas. Whereas popularized science uses rhetoric to strengthen both its argumentative and its literary qualities, such fiction depicts science in ways that often mirror the linguistic structures of science writing. Conversely, it is also possible to see how the devices of fiction are reflected in the narrative modes and language of popularized science, testifying to the dynamic interaction of the two cultures. Comparing the rhetorical elements of contemporary popular science with the devices of fiction further shows how science and art take part in negotiating the meaning of certain fundamental concepts of our culture, such as identity, knowledge and time. In this way it becomes possible to see that both are fundamental parts of the meaning-making process through which scientific ideas, as well as the great questions of human life, acquire their culturally constructed meanings.
Abstract:
The paper presents two new algorithms for the direct parallel solution of systems of linear equations. The algorithms employ a novel recursive doubling technique to obtain solutions to an nth-order system in n steps with no more than 2n(n−1) processors. Comparing their performance with the Gaussian elimination algorithm (GE), we show that they are almost 100% faster than the latter. This speedup is achieved by dispensing with all the computation involved in the back-substitution phase of GE. It is also shown that the new algorithms exhibit error characteristics superior to those of GE. An n(n+1) systolic array structure is proposed for the implementation of the new algorithms. We show that complete solutions can be obtained, through these single-phase solution methods, in 5n − log₂n − 4 computational steps, without the need for intermediate I/O operations.
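The recursive-doubling algorithms themselves are not reproduced here, but the key property they exploit, producing the solution in a single phase with no back-substitution, is what Gauss–Jordan elimination does serially. A minimal NumPy sketch for contrast with standard GE:

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Single-phase solve: eliminate above AND below each pivot, so the
    system is diagonal at the end and no back-substitution is needed.
    (Partial pivoting included for stability; O(n^3) serially.)"""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivot
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                       # normalize pivot row
        for i in range(n):                    # eliminate the whole column k
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 5.0]])
b = np.array([7.0, 4.0, 7.0])
print(gauss_jordan_solve(A, b), np.linalg.solve(A, b))  # should agree
```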
Abstract:
Perceiving students, science students especially, as mere consumers of facts and information belies the importance of engaging them with the principles underlying those facts and works against the facilitation of knowledge and understanding. Traditional didactic lecture approaches need a re-think if student classroom engagement and active learning are to be valued over fact memorisation and fact recall. In our undergraduate biomedical science programs across Years 1, 2 and 3 in the Faculty of Health at QUT, we have developed an authentic learning model with an embedded suite of pedagogical strategies that foster classroom engagement and allow for active learning in the sub-discipline area of medical bacteriology. The suite of pedagogical tools we have developed has been designed to enable its translation, with appropriate fine-tuning, to most biomedical and allied health discipline teaching and learning contexts. Indeed, aspects of the pedagogy have been successfully translated to the nursing microbiology study stream at QUT. The aims underpinning the pedagogy are for our students to: (1) Connect scientific theory with scientific practice in a more direct and authentic way, (2) Construct factual knowledge and facilitate a deeper understanding, and (3) Develop and refine their higher order flexible thinking and problem solving skills, both semi-independently and independently. The mindset and role of the teaching staff are critical to this approach, since for the strategy to be successful tertiary teachers need to abandon traditional instructional modalities based on one-way information delivery. Face-to-face classroom interactions between students and lecturer enable realisation of pedagogical aims (1), (2) and (3). The strategy we have adopted encourages teachers to view themselves more as expert guides in what is very much a student-focused process of scientific exploration and learning. Specific pedagogical strategies embedded in the authentic learning model we have developed include: (i) interactive lecture-tutorial hybrids or lectorials featuring teacher role-plays as well as class-level question-and-answer sessions, (ii) inclusion of "dry" laboratory activities during lectorials to prepare students for the wet laboratory to follow, (iii) real-world problem-solving exercises conducted during both lectorials and wet laboratory sessions, and (iv) designing class activities and formative assessments that probe a student's higher order flexible thinking skills. Flexible thinking in this context encompasses analytical, critical, deductive, scientific and professional thinking modes. The strategic approach outlined above is designed to provide multiple opportunities for students to apply principles flexibly according to a given situation or context, to adapt methods of inquiry strategically, to go beyond mechanical application of formulaic approaches, and, as much as possible, to self-appraise their own thinking and problem solving. The pedagogical tools have been developed within both workplace (real world) and theoretical frameworks. The philosophical core of the pedagogy is a coherent pathway of teaching and learning which we, and many of our students, believe is more conducive to student engagement and active learning in the classroom.
Qualitative and quantitative data derived from online and hardcopy evaluations, solicited and unsolicited student and graduate feedback, anecdotal evidence as well as peer review indicate that: (i) our students are engaging with the pedagogy, (ii) a constructivist, authentic-learning approach promotes active learning, and (iii) students are better prepared for workplace transition.
Abstract:
The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why- and how-questions. This theory is developed further by delineating criteria for evaluating competing explanations and by applying the theory to social scientific modelling practices and to the key concepts of equilibrium and mechanism. The dissertation comprises an introductory essay and six published original research articles. The main theses about model-based explanations in the social sciences argued for in the articles are the following. 1) The concept of explanatory power, often used to argue for the superiority of one explanation over another, encompasses five dimensions which are partially independent and involve some systematic trade-offs. 2) Not all equilibrium explanations causally explain the obtaining of the end equilibrium state from the multiple possible initial states. Instead, they often constitutively explain the macro property of the system with the micro properties of the parts (together with their organization). 3) There is an important ambivalence in the concept of mechanism used in many model-based explanations, and this difference corresponds to a difference between two alternative research heuristics. 4) Whether unrealistic assumptions in a model (such as a rational choice model) are detrimental to an explanation provided by the model depends on whether the representation of the explanatory dependency in the model is itself dependent on the particular unrealistic assumptions. Thus evaluating whether a literally false assumption in a model is problematic requires specifying exactly what is supposed to be explained and by what. 5) The question of whether an explanatory relationship depends on particular false assumptions can be explored through the process of derivational robustness analysis, and the importance of robustness analysis accounts for some of the puzzling features of the tradition of model-building in economics. 6) The fact that economists have been relatively reluctant to use true agent-based simulations to formulate explanations can be partially explained by the specific ideal of scientific understanding implicit in the practice of orthodox economics.