937 results for Multi-body Dynamics


Relevance: 30.00%

Publisher:

Abstract:

Previous research has highlighted the importance of positive physical activity (PA) behaviours during childhood in promoting sustained active lifestyles throughout the lifespan (Telama et al. 2005; 2014). It is in this context that the role of schools and teachers in facilitating PA education is promoted. Research suggests that teachers play an important role in shaping children's attitudes towards PA (Figley 1985) and that schools may be an efficient vehicle for PA provision and promotion (McGinnis, Kanner and DeGraw, 1991; Wechsler, Deveraux, Davis and Collins, 2000). Yet despite consensus that schools represent an ideal setting from which to 'reach' young people (Department of Health and Human Services, UK, 2012), there remains conceptual (e.g. multi-component intervention) and methodological (e.g. duration, intensity, family involvement) ambiguity regarding the mechanisms of change claimed by PA intervention programmes. This may, in part, contribute to research findings suggesting that PA interventions have had limited impact on children's overall activity levels and thereby limited impact on children's metabolic health (Metcalf, Henley & Wilkin, 2012). A marked criticism of the health promotion field has been its focus on behavioural change while failing to acknowledge the impact of context in influencing health outcomes (Golden & Earp, 2011). For years, the trans-theoretical model of behaviour change has been 'the dominant model for health behaviour change' (Armitage, 2009); this model focusses primarily on the individual and the psychology of the change process. Arguably, this model is limited by the individual's decision-making ability and degree of self-efficacy in achieving sustained behavioural change, and it does not take account of external factors that may hinder the individual's ability to realise change.
Similar to the trans-theoretical model, socio-ecological models identify the individual as the focal point of change, but they also emphasise the importance of connecting multiple impacting variables, in particular the connections between the social environment, the physical environment and public policy in facilitating behavioural change (REF). In this research, a social-ecological framework was used to connect the ways a PA intervention programme had an impact (or not) on participants, and to make explicit the foundational features of the programme that facilitated positive change. In this study, we examined the evaluation of a multi-agency approach to a PA intervention programme which aimed to increase physical activity, and awareness of the importance of physical activity, among key stage 2 (age 7-12) pupils in three UK primary schools. The agencies involved were the local health authority, a community-based charitable organisation, a local health administrative agency, and the city school district. In examining the impact of the intervention, we adopted a process evaluation model in order to better understand the mechanisms and context that facilitated change. Therefore, the aim of this evaluation was to describe the provision, process and impact of the intervention by 1) assessing changes in physical activity levels, 2) assessing changes in the students' attitudes towards physical activity, 3) examining students' perceptions of the child-sized fitness equipment in school and their likelihood of using the equipment outside of school, and 4) exploring staff perceptions, specifically the challenges and benefits, of facilitating equipment-based exercise sessions in the school environment.

Methodology, Methods, Research Instruments or Sources Used

Evaluation of the intervention was designed as a matched-control study and was undertaken over a seven-month period.
The school-based intervention involved three intervention schools (n=436; 224 boys) and one control school (n=123; 70 boys) in a low-socioeconomic, multicultural urban setting. The PA intervention was separated into two phases: a motivational DVD and 10 days of circuit-based exercise sessions (Phase 1), followed by a maintenance phase (Phase 2) that incorporated a PA reward programme and the use of specialist child-sized gym equipment located at each school for a period of 4 wk. Outcome measures were taken at baseline (January) and endpoint (July; end of the academic school year) using reliable and valid self-report measures. The children's attitudes towards PA were assessed using the Children's Attitudes towards Physical Activity (CATPA) questionnaire. The Physical Activity Questionnaire for Children (PAQ-C), a 7-day recall questionnaire, was used to assess PA levels over a school week. A standardised test battery (Fitnessgram®) was used to assess cardiovascular fitness, body composition, muscular strength and endurance, and flexibility. After the 4 wk period, similar child-sized equipment was available for general access at local community facilities. The control school did not receive any of the interventions. All physical fitness tests and PA questionnaires were administered and collected prior to the start of the intervention (January) and following the intervention period (July) by an independent evaluation team. Evaluation testing took place at the individual schools over 2-3 consecutive days (depending on the number of children to be tested at the school). Staff (n=19) and student (n=436) perceptions of the child-sized fitness equipment were assessed via questionnaires post-intervention. Students completed a questionnaire assessing enjoyment, usage, ease of use, and equipment access and usage in the community. A further questionnaire assessed staff perceptions of the delivery of the exercise sessions, classroom engagement and student perceptions.
Conclusions, Expected Outcomes or Findings

Findings showed that both the intervention (16.4%) and control groups had increased their PAQ-C scores by post-intervention (p < 0.05), with the intervention (17.8%) and control (21.3%) boys showing the greatest increase in physical activity levels. At post-intervention there was a 5.5% decline in the intervention girls' attitudes towards PA in the aesthetic subdomain (p = 0.009), whereas the control boys showed an increase in positive attitudes in the health domain (p = 0.003). No significant differences in attitudes towards physical activity were observed in any other domain for either group at post-intervention (p > 0.05). In the equipment questionnaire, 96% of the children stated that they enjoyed using the equipment and would like to use it again in the future; however, at post-intervention only 27% reported having used the equipment outside of school in the previous 7 days. Students identified the ski walker (34%) and cycle (32%) as their favourite pieces of equipment, and single-joint exercises such as the leg extension and bicep/tricep machines (<3%) as their least favourite. Key themes from staff were that the equipment sessions were enjoyable, a novel activity, that children felt very grown-up, and that the activity was linked to a real fitness experience. They also expressed the need for more support to deliver the sessions and more time for each session. Findings from this study suggest that a more integrated approach across the various agencies is required, particularly more support to increase teachers' pedagogical content knowledge in age-appropriate physical activity instruction. Recommendations for successful future implementation include a sufficient time period for all students to access and engage with the equipment, increased access to and marketing of facilities to parents within the local community, and professional teacher support strategies to facilitate the exercise sessions.

Relevance: 30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Publisher:

Abstract:

Unstructured mesh codes for modelling continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Parallelisation of such codes using Single Program Multiple Data (SPMD) domain decomposition techniques implemented with message passing has been demonstrated to provide high parallel efficiency, scalability to large numbers of processors P, and portability across a wide range of parallel platforms. High efficiency, especially for large P, requires that load balance is achieved in each parallel loop. For a code in which loops span a variety of mesh entity types, for example elements, faces and vertices, some compromise is required between the load balance for each entity type and the quantity of inter-processor communication required to satisfy data dependence between processors.
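The entity-type trade-off can be made concrete with a toy calculation. The sketch below (hypothetical counts, not from the paper) measures the load imbalance factor per entity type for one partition; a partition balanced on elements is typically not balanced on faces or vertices, since partition boundaries duplicate lower-dimensional entities.

```python
# Sketch: measuring per-entity-type load imbalance for a mesh partition.
# All counts are invented for illustration.

def imbalance(counts):
    """Load imbalance factor: max load / mean load (1.0 is perfect balance)."""
    mean = sum(counts) / len(counts)
    return max(counts) / mean

# Entity counts per processor for a 4-way partition balanced on elements.
partition = {
    "elements": [2500, 2500, 2500, 2500],
    "faces":    [5300, 5100, 4900, 4700],
    "vertices": [1450, 1400, 1300, 1250],
}

for entity, counts in partition.items():
    print(f"{entity}: imbalance = {imbalance(counts):.3f}")
```

A loop over elements runs at full efficiency here, while loops over faces or vertices lose a few percent; repartitioning to balance faces would in turn unbalance elements and change the communication volume.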

Relevance: 30.00%

Publisher:

Abstract:

This paper introduces systems of exchange values as tools for the organization of multi-agent systems. Systems of exchange values are defined on the basis of the theory of social exchanges developed by Piaget and Homans. A model of social organization is proposed in which social relations are construed as social exchanges and exchange values are used to support the continuity of the performance of social exchanges. The dynamics of social organizations is formulated in terms of the regulation of exchanges of values, so that social equilibrium is connected to the continuity of the interactions. The concept of a supervisor of social equilibrium is introduced as a centralized mechanism for solving the problem of the equilibrium of the organization. The equilibrium supervisor solves this problem by making use of a qualitative Markov Decision Process that uses numerical intervals for the representation of exchange values.
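As a loose illustration of the bookkeeping behind this idea (not the paper's qualitative MDP; the interval arithmetic and the tolerance are invented here), exchange balances can be tracked as numerical intervals and a centralised supervisor can check whether they stay near equilibrium:

```python
# Sketch (hypothetical representation): an agent's accumulated exchange
# balance tracked as an interval [lo, hi], with a supervisor that recommends
# an action when the balance drifts away from equilibrium.

class IntervalBalance:
    def __init__(self):
        self.lo = 0.0
        self.hi = 0.0

    def credit(self, lo, hi):
        # Value received in an exchange, known only to within an interval.
        self.lo += lo
        self.hi += hi

    def debit(self, lo, hi):
        # Value given away; subtracting an interval widens the bounds.
        self.lo -= hi
        self.hi -= lo

    def midpoint(self):
        return (self.lo + self.hi) / 2

def supervisor_recommendation(balance, tolerance=1.0):
    """Centralised equilibrium check: who should act to restore balance."""
    m = balance.midpoint()
    if m > tolerance:
        return "offer a service"    # in surplus: should reciprocate
    if m < -tolerance:
        return "request a service"  # in deficit
    return "in equilibrium"

b = IntervalBalance()
b.credit(2.0, 3.0)  # received help valued somewhere in [2, 3]
b.debit(0.5, 1.0)   # returned a smaller favour valued in [0.5, 1]
print(supervisor_recommendation(b))
```

The interval representation captures the qualitative character of exchange values: the supervisor reasons about ranges rather than exact quantities.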

Relevance: 30.00%

Publisher:

Abstract:

Understanding how biodiversity is spatially distributed over both the short and the long term, and what factors affect that distribution, is critical for modelling the spatial pattern of biodiversity as well as for promoting effective conservation planning and practice. This dissertation examines factors that influence short-term and long-term avian distribution from the perspective of the geographical sciences. The research develops landscape-level habitat metrics to characterize forest height heterogeneity and examines their efficacy in modelling avian richness at the continental scale. Two types of novel vegetation-height-structured habitat metrics are created, based on second-order texture algorithms and the concepts of patch-based habitat metrics. I correlate the height-structured metrics with the richness of different forest guilds, and also examine their efficacy in multivariate richness models. The results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of two forest bird guilds. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. The second and third projects focus on analyzing centroids of avian distributions and testing hypotheses regarding the direction and speed of distribution shifts. I first showcase the usefulness of centroid analysis for characterizing the distribution changes of a few case-study species. Applying the centroid method to 57 permanent-resident bird species, I show that multi-directional distribution shifts occurred in a large number of the studied species. I also demonstrate that plains birds are not shifting their distributions faster than mountain birds, contrary to the prediction of the climate-change velocity hypothesis.
By modelling the abundance change rate at the regional level, I show that extreme climate events and precipitation measures associate closely with some of the long-term distribution shifts. This dissertation improves our understanding of bird habitat characterization for species richness modelling, and expands our knowledge of how avian populations have shifted their ranges in North America in response to changing environments over the past four decades. The results provide an important scientific foundation for more accurate predictive species distribution modelling in the future.
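The centroid method referred to above can be sketched as an abundance-weighted mean of site coordinates; the sites and counts below are invented for illustration, not the dissertation's survey data:

```python
# Sketch: abundance-weighted centroid of a species' distribution, and the
# shift of that centroid between two survey periods.

def centroid(sites):
    """sites: list of (lon, lat, abundance). Returns weighted (lon, lat)."""
    total = sum(a for _, _, a in sites)
    lon = sum(x * a for x, _, a in sites) / total
    lat = sum(y * a for _, y, a in sites) / total
    return lon, lat

# Hypothetical counts at three survey sites, two periods apart.
period_1 = [(-100.0, 40.0, 10), (-98.0, 42.0, 30), (-96.0, 41.0, 20)]
period_2 = [(-100.0, 41.0, 10), (-98.0, 43.0, 35), (-96.0, 42.0, 25)]

c1, c2 = centroid(period_1), centroid(period_2)
d_lon, d_lat = c2[0] - c1[0], c2[1] - c1[1]
print(f"centroid shift: {d_lon:+.2f} deg lon, {d_lat:+.2f} deg lat")
```

Because abundance weights every site, the centroid can move in any compass direction, which is how multi-directional shifts are detected even when range edges are noisy.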

Relevance: 30.00%

Publisher:

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we have presented a non-intrusive approach to coupling the model and a DA scheme, in which an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the model code changes needed are minimal: only a few lines which facilitate input and output. Apart from being simple to implement, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead this approach introduces, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, because the TSM data were sparse in both time and space, the match was poor. The use of multiple automatic stations with real-time data is important to avoid the temporal sparsity problem; with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble size limit on performance, leads to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. If ROM were combined with the non-intrusive DA approach, it might result in a cheaper algorithm that would relax the computational challenges existing in the fields of modelling and DA.
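For orientation, the ensemble idea behind methods such as VEnKF can be reduced to a toy scalar analysis step (this is a plain stochastic EnKF update, not the thesis's VEnKF, and all numbers are illustrative): the forecast error covariance is estimated from the ensemble itself, so no covariance matrix needs to be stored or inverted.

```python
# Sketch: one analysis step of an ensemble Kalman filter for a scalar state.
# The ensemble spread supplies the forecast error covariance that classical
# Kalman filters must otherwise propagate explicitly.
import random

random.seed(0)

def enkf_analysis(ensemble, obs, obs_var):
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast covariance
    gain = var / (var + obs_var)                            # Kalman gain
    # Update each member against an independently perturbed observation.
    return [x + gain * (obs + random.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

forecast = [random.gauss(1.0, 0.5) for _ in range(100)]  # prior ensemble
analysis = enkf_analysis(forecast, obs=1.6, obs_var=0.04)

m_f = sum(forecast) / len(forecast)
m_a = sum(analysis) / len(analysis)
print(f"forecast mean {m_f:.3f} -> analysis mean {m_a:.3f}")
```

The analysis mean is pulled towards the observation in proportion to the gain; resampling the ensemble after each such step is the VEnKF remedy for the inbreeding problem mentioned above.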

Relevance: 30.00%

Publisher:

Abstract:

Topography is often thought of as exclusively linked to mountain ranges formed by plate collision. It is now known, however, that apart from compression, uplift and denudation of rocks may be triggered by rifting, as happens at elevated passive margins, and away from plate boundaries both by intra-plate stress causing reactivation of older structures and by epeirogenic movements driven by mantle dynamics that initiate long-wavelength uplift. In the Cenozoic, central west Britain and other parts of the North Atlantic margins experienced multiple episodes of rock uplift and denudation that were variable at both spatial and temporal scales. The origin of topography in central west Britain is enigmatic and, because of its location, may be related to any of the processes mentioned above. In this study, three low-temperature thermochronometers, the apatite fission track (AFT) and apatite and zircon (U-Th-Sm)/He (AHe and ZHe, respectively) methods, were used to establish the rock cooling history from 200◦C to 30◦C. The samples were collected from the intrusive rocks in the high-elevation, high-relief regions of the Lake District (NW England), southern Scotland and northern Wales. AFT ages from the region are youngest (55–70 Ma) in the Lake District and increase northwards into southern Scotland and southwards into north Wales (>200 Ma). AHe and ZHe ages show no systematic pattern; the former range from 50 to 80 Ma and the latter tend to record the post-emplacement cooling of the intrusions (200–400 Ma). Multi-thermochronometric inverse modelling suggests a ubiquitous, rapid Late Cretaceous/early Palaeogene cooling event that is particularly marked in the Lake District and Criffell. The timing and rate of cooling in southern Scotland and in northern Wales are poorly resolved, as the amount of cooling was less than 60◦C.
The Lake District plutons were at >110◦C prior to the early Palaeogene; cooling was due to the combined effect of high heat flow from the heat-producing granite batholith and the blanketing effect of the overlying low-conductivity Late Mesozoic limestones and mudstones. Modelling of the heat transfer suggests that this combination produced an elevated geothermal gradient within the sedimentary rocks (50–70◦C/km) that was about two times higher than at the present day. Inverse modelling of the AFT and AHe data, taking the crustal structure into consideration, suggests that denudation was highest, 2.0–2.5 km, in the coastal areas of the Lake District and southern Scotland, gradually decreasing to less than 1 km in the northern Southern Uplands and northern Wales. Both rift-related uplift and intra-plate compression correlate poorly with the timing, location and spatial distribution of the early Palaeogene denudation. The pattern of early Palaeogene denudation correlates with the thickness of magmatic underplating if the changes of mean topography, Late Cretaceous water depth and eroded rock density are taken into consideration. However, uplift due to underplating alone cannot fully account for the total early Palaeogene denudation. The amount that is not explained by underplating is, however, roughly spatially constant across the study area and can be attributed to the transient thermal uplift induced by the arrival of the mantle plume. No other mechanisms are required to explain the observed pattern of denudation. The onset of denudation across the region is not uniform. Denudation started at 70–75 Ma in the central part of the Lake District, whereas in the coastal areas the rapid erosion appears to have initiated later (65–60 Ma). This is ~10 Ma earlier than the first volcanic manifestation of the proto-Iceland plume and favours the hypothesis of a short period of plume incubation below the lithosphere before the volcanism.
In most of the localities, the rocks had cooled to temperatures lower than 30◦C by the end of the Palaeogene, suggesting that the total Neogene denudation was, at a maximum, several hundreds of metres. Rapid cooling in the last 3 million years is resolved in some places in southern Scotland, where it could be explained by glacial erosion and post-glacial isostatic uplift.
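The blanketing effect admits a back-of-the-envelope check: at steady state with a constant vertical heat flux q, the thermal gradient in a layer of conductivity k is q/k, so a low-conductivity sedimentary cover steepens the near-surface gradient. The values below are illustrative only, not the study's model inputs.

```python
# Sketch: steady-state geothermal gradient through layers of different
# thermal conductivity, assuming a constant vertical heat flux q.
# Illustrative values: 120 mW/m^2 is an elevated flux such as might occur
# above a radiogenic batholith; 2.0 W/(m K) is typical of mudstone,
# 3.5 W/(m K) of crystalline basement.

def gradient_C_per_km(q_mW_m2, k_W_mK):
    """Temperature gradient (deg C per km) for heat flux q through conductivity k."""
    q = q_mW_m2 * 1e-3            # mW/m^2 -> W/m^2
    return q / k_W_mK * 1000.0    # K/m -> deg C/km

print(gradient_C_per_km(q_mW_m2=120, k_W_mK=2.0))  # low-k sedimentary blanket
print(gradient_C_per_km(q_mW_m2=120, k_W_mK=3.5))  # conductive basement
```

The same flux through the low-conductivity blanket yields a gradient in the 50–70◦C/km range quoted above, roughly double that in more conductive rock, which is the mechanism the thesis invokes.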

Relevance: 30.00%

Publisher:

Abstract:

This thesis proves certain results concerning an important question in non-equilibrium quantum statistical mechanics: the derivation of effective evolution equations approximating the dynamics of a system of a large number of bosons initially at equilibrium (the ground state at very low temperatures). The dynamics of such systems is governed by the time-dependent linear many-body Schroedinger equation, from which it is typically difficult to extract useful information because the number of particles is large. We study quantitatively (i.e. with explicit bounds on the error) how a suitable one-particle non-linear Schroedinger equation arises in the mean field limit as the number of particles N → ∞, and how appropriate corrections to the mean field provide better approximations of the exact dynamics. In the first part of this thesis we consider the evolution of N bosons, where N is large, with two-body interactions of the form N³ᵝv(Nᵝ⋅), 0≤β≤1. The parameter β measures the strength and the range of the interactions. We compare the exact evolution with an approximation which considers the evolution of a mean field coupled with an appropriate description of pair excitations, see [18,19] by Grillakis-Machedon-Margetis. We extend the results for 0 ≤ β < 1/3 in [19, 20] to the case of β < 1/2 and obtain an error bound of the form p(t)/Nᵅ, where α>0 and p(t) is a polynomial, which implies a specific rate of convergence as N → ∞. In the second part, utilizing estimates of the type discussed in the first part, we compare the exact evolution with the mean field approximation in the sense of marginals. We prove that the exact evolution is close to the approximate one in trace norm for times of the order o(1)√N, compared to log(o(1)N) as obtained in Chen-Lee-Schlein [6] for the Hartree evolution. Estimates of a similar type are obtained for stronger interactions as well.
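For orientation, the limiting one-particle equation in the β = 0 (Hartree) regime takes the standard mean-field form; this is textbook material rather than a result specific to the thesis:

```latex
i\,\partial_t \phi_t = -\Delta \phi_t + \bigl( v \ast |\phi_t|^2 \bigr)\,\phi_t ,
\qquad \phi_{t=0} = \phi_0 .
```

For β > 0 the rescaled potential N³ᵝv(Nᵝ⋅) concentrates as N → ∞, so the convolution is effectively replaced by a local self-interaction and one obtains a nonlinear Schroedinger equation with cubic nonlinearity; the pair-excitation correction refines this mean-field picture.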

Relevance: 30.00%

Publisher:

Abstract:

In this talk, we propose an all-regime Lagrange-Projection-like numerical scheme for the gas dynamics equations. By all-regime, we mean that the numerical scheme is able to compute accurate approximate solutions with an under-resolved discretization with respect to the Mach number M, i.e. such that the ratio between the Mach number M and the mesh size or the time step is small with respect to 1. The key idea is to decouple the acoustic and transport phenomena and then alter the numerical flux in the acoustic approximation to obtain a uniform truncation error in terms of M. This modified scheme is conservative and endowed with good stability properties with respect to the positivity of the density and the internal energy. A discrete entropy inequality under a condition on the modification is obtained thanks to a reinterpretation of the modified scheme in the Harten, Lax and van Leer formalism. A natural extension to multi-dimensional problems discretized over unstructured meshes is proposed. A simple and efficient semi-implicit scheme is also proposed. The resulting scheme is stable under a CFL condition driven by the (slow) material waves, not by the (fast) acoustic waves, and therefore satisfies the all-regime property. Numerical evidence is presented showing the ability of the scheme to deal with tests in which the flow regime varies from low to high Mach values.
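The payoff of a material-wave CFL condition at low Mach number can be quantified with a one-line estimate (illustrative numbers, not from the talk): an explicit acoustic scheme is limited by dt ≤ Δx/(|u| + c), whereas the semi-implicit scheme is limited only by dt ≤ Δx/|u|, a factor of roughly 1 + 1/M larger.

```python
# Sketch: time-step gain of a material-wave CFL over an acoustic CFL at
# low Mach number. With sound speed c and flow speed u = M*c:

def dt_acoustic(dx, u, c):
    return dx / (abs(u) + c)   # explicit scheme limited by acoustic waves

def dt_material(dx, u):
    return dx / abs(u)         # semi-implicit scheme limited by transport

dx, c, M = 0.01, 340.0, 0.01   # mesh size [m], sound speed [m/s], Mach number
u = M * c
ratio = dt_material(dx, u) / dt_acoustic(dx, u, c)
print(f"time-step gain at M={M}: {ratio:.0f}x")
```

At M = 0.01 the semi-implicit scheme can take time steps about a hundred times larger, which is exactly what makes under-resolved low-Mach computations affordable.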

Relevance: 30.00%

Publisher:

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, this would allow you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To be able to respond to the first question, the simulation model needs to be an explanatory model; this requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to prepare the newcomer for what we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we unwittingly encountered. Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements, to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system.
Section 6 provides a collection of resources for further study, and finally in Section 7 we conclude the chapter with a short summary.
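To give a flavour of the kind of model this chapter discusses, here is a deliberately minimal agent-based simulation (the rule and the numbers are invented for illustration): each agent follows a simple local rule, and the system-level behaviour emerges from the interactions rather than being prescribed.

```python
# Sketch: a minimal agent-based diffusion model. An "informed" agent tells
# one randomly chosen agent per time step; the S-shaped adoption curve
# emerges from this local rule alone.
import random

random.seed(42)

def step(informed, population):
    """One tick: every informed agent contacts one random agent."""
    newly = set()
    for _ in informed:
        contact = random.randrange(population)
        if contact not in informed:
            newly.add(contact)
    return informed | newly

informed = {0}                  # agent 0 starts with the information
history = [len(informed)]
for _ in range(10):
    informed = step(informed, population=50)
    history.append(len(informed))

print(history)                  # growth observable at every tick, not just at the end
```

Note the point made above about running rather than solving: the model state can be inspected at every tick, which is precisely what an analytical closed-form result would not give you.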

Relevance: 30.00%

Publisher:

Abstract:

Several predictive models of epidemics of arthropod-vectored plant viruses have been studied in an attempt to bring understanding to the complex but specific relationships within the three-cornered pathosystem (virus, vector and host plant), as well as its interactions with the environment. A large body of studies focuses mainly on weather-based models as a management tool for monitoring pests and diseases, with very few incorporating the contribution of the vector's life processes to the disease dynamics, which is an essential aspect when mitigating virus incidence in a crop stand. In this study, we hypothesized that the multiplication and spread of tomato spotted wilt virus (TSWV) in a crop stand is strongly related to its influence on Frankliniella occidentalis preferential behaviour and life expectancy. Model dynamics of important aspects in disease development within the TSWV-F. occidentalis-host plant interactions were developed, focusing on F. occidentalis' life processes as influenced by TSWV. The results show that the influence of TSWV on F. occidentalis preferential behaviour leads to an estimated increase in the relative acquisition rate of the virus, and up to a 33% increase in the transmission rate to healthy plants. Also, increased life expectancy, which relates to improved fitness, is dependent on the virus-induced preferential behaviour, consequently promoting multiplication and spread of the virus in a crop stand. The development of vector-based models could further help in elucidating the role of tri-trophic interactions in agricultural disease systems. Use of the model to examine the components of the disease process could also improve our understanding of how specific epidemiological characteristics interact to cause diseases in crops. With this level of understanding we can efficiently develop more precise control strategies for both the virus and the vector.
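The qualitative effect described above can be sketched with a deliberately minimal vector-host model (forward-Euler integration, invented parameters, not the study's fitted model): scaling the transmission rate by a preference factor of about 1.33 mimics the reported 33% increase.

```python
# Sketch: virus-induced vector preference raising plant-to-plant spread.
# All parameters are illustrative, not fitted to TSWV data.

def simulate(pref_factor, days=60, dt=0.1):
    """Forward-Euler run; returns the final fraction of infected plants."""
    S, I = 0.99, 0.01          # susceptible / infected plant fractions
    V = 0.05                   # fraction of viruliferous vectors (held fixed)
    beta = 0.2 * pref_factor   # transmission rate, scaled by preference
    for _ in range(int(days / dt)):
        new_inf = beta * V * S
        S -= new_inf * dt
        I += new_inf * dt
    return I

baseline = simulate(pref_factor=1.0)
with_pref = simulate(pref_factor=1.33)   # ~33% higher transmission
print(f"infected plants: {baseline:.3f} vs {with_pref:.3f}")
```

Even with the vector population held fixed, the preference-driven increase in the contact rate visibly accelerates epidemic build-up; coupling V to the vector's life expectancy, as the study does, would compound the effect.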

Relevance: 30.00%

Publisher:

Abstract:

The purpose of the present study was to investigate percentage body fat (%BF) differences in three Spanish dance disciplines and to compare skinfold and bioelectrical impedance predictions of body fat percentage in the same sample. Seventy-six female dancers, divided into three groups, Classical (n=23), Spanish (n=29) and Flamenco (n=24), were measured using skinfold measurements at four sites: triceps, subscapular, biceps and iliac crest, and whole-body multi-frequency bioelectrical impedance (BIA). The skinfold measures were used to predict body fat percentage via Durnin and Womersley's equation, and the Segal, Sun and Yannakoulia equations were applied to the BIA measurements. Differences in percent fat mass between groups (Classical, Spanish and Flamenco) were tested using repeated measures analysis of variance (ANOVA). Pearson's product-moment correlations were also performed on the body fat percentage values obtained using both methods, and Bland-Altman plots were used to assess agreement between the anthropometric and BIA methods. Repeated measures analysis of variance found no differences in %BF between disciplines (p>0.05). Fat percentage correlations ranged from r=0.57 to r=0.97 (all p<0.001). Bland-Altman analysis, with BIA Yannakoulia as the reference method, revealed differences with BIA Segal (-0.35 ± 2.32%, 95%CI: -0.89 to 0.18, p=0.38), with BIA Sun (-0.73 ± 2.3%, 95%CI: -1.27 to -0.20, p=0.014) and with Durnin-Womersley (-2.65 ± 2.48%, 95%CI: -3.22 to -2.07, p<0.0001). It was concluded that body fat percentage estimates by BIA were systematically different from those of the skinfold method in young adult female dancers, with a tendency to produce underestimations as %BF increased with the Segal and Durnin-Womersley equations compared to Yannakoulia; these methods are therefore not interchangeable.
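For readers unfamiliar with the agreement analysis used here, the sketch below computes the Bland-Altman bias and 95% limits of agreement for two sets of %BF estimates; the values are invented for illustration, not the study's data.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between two methods
# of estimating body fat percentage. Data are hypothetical.
import statistics

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd  # bias, lower, upper limit

bia = [22.1, 24.3, 19.8, 26.0, 21.5]   # %BF by bioimpedance (hypothetical)
skf = [23.0, 26.1, 20.5, 28.2, 22.9]   # %BF by a skinfold equation (hypothetical)

bias, lower, upper = bland_altman(bia, skf)
print(f"bias {bias:.2f}%, limits of agreement [{lower:.2f}, {upper:.2f}]")
```

A non-zero bias with wide limits of agreement is exactly the pattern that leads to the "not interchangeable" conclusion: correlation alone (which was high here, up to r=0.97) cannot reveal a systematic offset between methods.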

Relevance: 30.00%

Publisher:

Abstract:

Cells adapt to their changing world by sensing environmental cues and responding appropriately. This is made possible by complex cascades of biochemical signals that originate at the cell membrane. In the last decade it has become apparent that these signals can also arise from physical cues in the environment. Our motivation is to investigate the role of physical factors in the cellular response of the B lymphocyte. B cells patrol the body for signs of invading pathogens in the form of antigen on the surface of antigen-presenting cells. Binding of antigen to surface proteins initiates biochemical signaling essential to the immune response. Once contact is made, the B cell spreads on the surface of the antigen-presenting cell in order to gather as much antigen as possible. The physical mechanisms that govern this process are unexplored. In this research, we examine the role of two physical parameters, antigen mobility and cell surface topography, in B cell spreading and activation. Both parameters are biologically relevant to immunogens for vaccine design, which can provide laterally mobile or immobile antigens and topographical surfaces. Topography also influences B cell response and the formation of the cell-cell junction, and is biologically relevant because antigen-presenting cells have highly convoluted membranes, resulting in variable topography. We found that B cell activation required the formation of antigen-receptor clusters and their translocation within the attachment plane. We showed that cells which failed to achieve these mobile clusters, because ligand mobility was restricted, were much less activation-competent. To investigate the effect of topography, we used nano- and micro-patterned substrates on which B cells were allowed to spread and become activated.
We found that B cell spreading, actin dynamics, B cell receptor distribution and calcium signaling are dependent on the topographical patterning of the substrate. A quantitative understanding of cellular response to physical parameters is essential to uncover the fundamental mechanisms that drive B cell activation. The results of this research are highly applicable to the field of vaccine development and therapies for autoimmune diseases. Our studies of the physical aspects of lymphocyte activation will reveal the role these factors play in immunity, thus enabling their optimization for biological function and potentially enabling the production of more effective vaccines.

Relevance: 30.00%

Publisher:

Abstract:

We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
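As a concrete anchor for the review, a canonical Amari-type scalar neural field equation, of the general kind surveyed here, reads:

```latex
\tau \,\frac{\partial u(\mathbf{x},t)}{\partial t}
  = -\,u(\mathbf{x},t)
  + \int_{\Omega} w\!\left(|\mathbf{x}-\mathbf{y}|\right)
    f\!\bigl(u(\mathbf{y},t)\bigr)\, \mathrm{d}\mathbf{y} ,
```

with u the coarse-grained activity, w a distance-dependent connectivity kernel and f a sigmoidal firing-rate function. The non-local integral term is what the reformulations as local differential (brain wave) equations mentioned above seek to replace with local differential operators, which is possible for particular choices of w.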