Abstract:
This paper presents the results of shaking table tests on models of rigid-faced reinforced soil retaining walls in which reinforcement materials of different tensile strength were used. The construction of the model retaining walls in a laminar box mounted on a shaking table, the instrumentation and the results from the shaking table tests are described in detail and the effects of the reinforcement parameters on the acceleration response at different elevations of the retaining wall, horizontal soil pressures and face deformations are presented. It was observed from these tests that the horizontal face displacement response of the rigid-faced retaining walls was significantly affected by the inclusion of reinforcement and even low-strength polymer reinforcement was found to be efficient in significantly reducing the deformation of the face. The acceleration amplifications were, however, observed to be less influenced by the reinforcement parameters. The results obtained from this study are helpful in understanding the relative performance of reinforced soil retaining walls under the different test conditions used in the experiments.
Abstract:
Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood and a patient dies in roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase I (LRI), and reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. We also study two types of control schemes that have been suggested for the control of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in the elimination of spiral turbulence. We discuss the theoretical and experimental implications of our study that have a direct bearing on defibrillation, i.e., the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
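For orientation, the simplest of the four models above, the Panfilov model, illustrates the generic reaction-diffusion structure shared by all of them. The sketch below gives its standard two-variable form (the piecewise-linear kinetics f(e) and the switching function ε(e,g) follow Panfilov's original parameterization, which is not reproduced here):

    \[
    \frac{\partial e}{\partial t} = \nabla^{2} e - f(e) - g, \qquad
    \frac{\partial g}{\partial t} = \varepsilon(e,g)\,\bigl(k\,e - g\bigr),
    \]

where e is the scaled transmembrane potential and g a recovery variable. Spiral (2D) and scroll (3D) waves arise as rotating solutions of such equations, and conduction or ionic inhomogeneities enter as spatially varying parameters or diffusion coefficients.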
Abstract:
Exposure to water-damaged buildings and the associated health problems have evoked concern and created confusion during the past 20 years. Individuals exposed to buildings with moisture problems report adverse health effects such as non-specific respiratory symptoms. Microbes, especially fungi, growing on the damp material have been considered as potential sources of the health problems encountered in these buildings. Fungi and their airborne spores contain allergens and secondary metabolites which may trigger allergic as well as inflammatory types of responses in the eyes and airways. Although epidemiological studies have revealed an association between damp buildings and health problems, no direct cause-and-effect relationship has been established. Further knowledge is needed about the epidemiology and the mechanisms leading to the symptoms associated with exposure to fungi. Two different approaches have been used in this thesis to investigate the diverse health effects associated with exposure to moulds. In the first part, sensitization to moulds was evaluated and potential cross-reactivity studied in patients attending a hospital for suspected allergy. In the second part, one typical mould known to be found in water-damaged buildings and to produce toxic secondary metabolites was used to study the airway responses in an experimental model. Exposure studies were performed on both naive and allergen-sensitized mice. The first part of the study showed that mould allergy is rare and highly dependent on the atopic status of the examined individual. The prevalence of sensitization was 2.7% to Cladosporium herbarum and 2.8% to Alternaria alternata in patients, the majority of whom were atopic subjects. Some of the patients sensitized to mould suffered from atopic eczema. Frequently these patients were observed to possess specific serum IgE antibodies to a yeast present in the normal skin flora, Pityrosporum ovale. In some of these patients, the IgE binding was found to be partly due to binding to shared glycoproteins in the mould and yeast allergen extracts. The second part of the study revealed that exposure to Stachybotrys chartarum spores induced an airway inflammation in the lungs of mice. The inflammation was characterized by an influx of inflammatory cells, mainly neutrophils and lymphocytes, into the lungs, although almost no differences in airway responses were seen between the satratoxin-producing and non-satratoxin-producing strains. On the other hand, when mice were exposed to S. chartarum and sensitized/challenged with ovalbumin, the extent of the inflammation was markedly enhanced. A synergistic increase in the numbers of inflammatory cells was seen in bronchoalveolar lavage (BAL) fluid, and severe inflammation was observed in the histological lung sections. In conclusion, the results in this thesis imply that exposure to moulds in water-damaged buildings may trigger health effects in susceptible individuals. The symptoms can rarely be explained by IgE-mediated allergy to moulds; other non-allergic mechanisms seem to be involved. Stachybotrys chartarum is one of the moulds potentially responsible for health problems. In this thesis, new reaction models for the airway inflammation induced by S. chartarum have been established using experimental approaches. The immunological status played an important role in the airway inflammation, enhancing the effects of mould exposure. The results imply that sensitized individuals may be more susceptible to mould exposure than non-sensitized individuals.
Abstract:
MEG directly measures neuronal events and has greater temporal resolution than fMRI, whose temporal resolution is limited mainly by the slower timescale of the hemodynamic response. On the other hand, fMRI has advantages in spatial resolution, while the localization results with MEG can be ambiguous due to the non-uniqueness of the electromagnetic inverse problem. Thus, these methods could provide complementary information and could be used to create both spatially and temporally accurate models of brain function. We investigated the degree of overlap, revealed by the two imaging methods, in areas involved in sensory or motor processing in healthy subjects and neurosurgical patients. Furthermore, we used the spatial information from fMRI to construct a spatiotemporal model of the MEG data in order to investigate the sensorimotor system and to create a spatiotemporal model of its function. We compared the localization results from MEG and fMRI with invasive electrophysiological cortical mapping. We used a recently introduced method, contextual clustering, for hypothesis testing of fMRI data and assessed the effect of using neighbourhood information on the reproducibility of fMRI results. Using MEG, we identified the ipsilateral primary sensorimotor cortex (SMI) as a novel source area contributing to the somatosensory evoked fields (SEF) to median nerve stimulation. Using combined MEG and fMRI measurements, we found that two separate areas in the lateral fissure may be the generators of the SEF responses from the secondary somatosensory cortex region. The two imaging methods indicated activation in corresponding locations. By using complementary information from MEG and fMRI, we established a spatiotemporal model of somatosensory cortical processing. This spatiotemporal model of cerebral activity was in good agreement with results from several studies using invasive electrophysiological measurements and with anatomical studies in monkey and man concerning the connections between somatosensory areas. In neurosurgical patients, the MEG dipole model turned out to be more reliable than fMRI in the identification of the central sulcus. This was due to prominent activation in non-primary areas in fMRI, which in some cases led to erroneous or ambiguous localization of the central sulcus.
Abstract:
Despite positive testing in animal studies, more than 80% of novel drug candidates fail to prove their efficacy when tested in humans. This is primarily due to the use of preclinical models that are not able to recapitulate the physiological or pathological processes in humans. Hence, one of the key challenges in the field of translational medicine is to “make the model organism mouse more human.” To get answers to questions that would be prognostic of outcomes in human medicine, the mouse's genome can be altered in order to create a more permissive host that allows the engraftment of human cell systems. It has been shown in the past that these strategies can improve our understanding of tumor immunology. However, the translational benefits of these platforms have still to be proven. In the 21st century, several research groups and consortia around the world have taken up the challenge to improve our understanding of how to humanize the animal's genetic code, its cells and, based on tissue engineering principles, its extracellular microenvironment, its tissues, or entire organs, with the ultimate goal of fostering the translation of new therapeutic strategies from bench to bedside. This article provides an overview of the state of the art of humanized models of tumor immunology and highlights future developments in the field, such as the application of tissue engineering and regenerative medicine strategies to further enhance humanized murine model systems.
Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models
Abstract:
The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced (raw) observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of the ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid the noise amplification caused by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of the undifferenced PPP solution is usually shorter than that of the ionospheric-free PPP solution, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high-latitude areas, while this impact is less obvious in the GPS/GLONASS combined configuration. The results also indicate that the BDS GEO satellites have negative impacts on undifferenced PPP performance, given the current “poor” orbit and clock knowledge of GEO satellites. More generally, the multi-GNSS undifferenced PPP results show improvements in convergence time of more than 60% for both the single- and dual-frequency solutions, while the positioning accuracy after convergence shows no significant improvement for the dual-frequency solutions but an improvement of about 25% on average for the single-frequency solutions.
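For reference, the contrast drawn above can be made concrete in the usual PPP notation (a textbook-style sketch, not the paper's exact formulation; hardware delays are omitted for brevity). The raw, undifferenced code and phase observations on frequency j are modelled as

    \[
    P_{r,j}^{s} = \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s} + \gamma_{j}\, I_{r,1}^{s} + \varepsilon_{P}, \qquad
    L_{r,j}^{s} = \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s} - \gamma_{j}\, I_{r,1}^{s} + \lambda_{j} N_{r,j}^{s} + \varepsilon_{L},
    \]

with \gamma_{j} = f_{1}^{2}/f_{j}^{2}, so the slant ionospheric delay I_{r,1}^{s} is kept as an estimated parameter, which a priori corrections can constrain. The classical ionospheric-free alternative,

    \[
    P_{IF} = \frac{f_{1}^{2} P_{1} - f_{2}^{2} P_{2}}{f_{1}^{2} - f_{2}^{2}},
    \]

eliminates the first-order ionospheric term but amplifies measurement noise by roughly a factor of three for GPS L1/L2, which is the noise-amplification drawback mentioned above.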
Abstract:
Tactile sensation plays an important role in everyday life. While the somatosensory system has been studied extensively, the majority of information has come from studies using animal models. Recent development of high-resolution anatomical and functional imaging techniques has enabled the non-invasive study of human somatosensory cortex and thalamus. This thesis provides new insights into the functional organization of the human brain areas involved in tactile processing using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). The thesis also demonstrates certain optimizations of MEG and fMRI methods. Tactile digit stimulation elicited stimulus-specific responses in a number of brain areas. Contralateral activation was observed in the somatosensory thalamus (Study II), primary somatosensory cortex (SI; I, III, IV), and posterior auditory belt area (III). Bilateral activation was observed in secondary somatosensory cortex (SII; II, III, IV). Ipsilateral activation was found in the post-central gyrus (area 2 of SI cortex; IV). In addition, phasic deactivation was observed within ipsilateral SI cortex and bilateral primary motor cortex (IV). Detailed investigation of the tactile responses demonstrated that the arrangement of distal-proximal finger representations in area 3b of SI in humans is similar to that found in monkeys (I). An optimized MEG approach was sufficient to resolve such fine detail in functional organization. The SII region appeared to contain double representations for fingers and toes (II). The detection of activations in the SII region and thalamus improved at the individual and group levels when cardiac-gated fMRI was used (II). Better detection of body part representations at the individual level is an important improvement, because identification of individual representations is crucial for studying brain plasticity in somatosensory areas. The posterior auditory belt area demonstrated responses to both auditory and tactile stimuli (III), implicating this area as a physiological substrate for the auditory-tactile interaction observed in earlier psychophysical studies. Comparison of different smoothing parameters (III) demonstrated that proper evaluation of co-activation should be based on individual-subject analysis with minimal or no smoothing. Tactile input consistently influenced area 3b of the human ipsilateral SI cortex (IV). The observed phasic negative fMRI response is proposed to result from interhemispheric inhibition via trans-callosal connections. This thesis contributes to a growing body of human data suggesting that processing of tactile stimuli involves multiple brain areas, with different spatial patterns of cortical activation for different stimuli.
Abstract:
Anatomical brain networks change throughout life and with disease. Genetic analysis of these networks may help identify processes giving rise to heritable brain disorders, but we do not yet know which network measures are promising for genetic analyses. Many factors affect the downstream results, such as the tractography algorithm used to define structural connectivity. We tested nine different tractography algorithms and four normalization methods to compute brain networks for 853 young healthy adults (twins and their siblings). We fitted genetic structural equation models to all nine network measures, after a normalization step to increase network consistency across tractography algorithms. Probabilistic tractography algorithms with global optimization (such as Probtrackx and Hough) yielded higher heritability statistics than 'greedy' algorithms (such as FACT), which process small neighborhoods at each step. Some global network measures (Probtrackx-derived GLOB and ST) showed significant genetic effects, making them attractive targets for genome-wide association studies.
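In the standard twin-design formulation, the genetic structural equation models referred to here are ACE decompositions (a generic sketch, not this paper's specific fitting procedure): the variance of each network measure is partitioned as

    \[
    V_{P} = a^{2} + c^{2} + e^{2},
    \]

where a^{2}, c^{2}, and e^{2} are the additive genetic, common (shared) environmental, and unique environmental components, and the heritability such models report is h^{2} = a^{2}/V_{P}. The model is identified by the differing genetic overlap of the twin types: monozygotic pairs share all additive genetic effects and dizygotic pairs on average half, so that for standardized measures r_{MZ} = a^{2} + c^{2} and r_{DZ} = \tfrac{1}{2}a^{2} + c^{2}.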
Genetic analysis of structural brain connectivity using DICCCOL models of diffusion MRI in 522 twins
Abstract:
Genetic and environmental factors affect white matter connectivity in the normal brain, and they also influence diseases in which brain connectivity is altered. Little is known about genetic influences on brain connectivity, despite wide variations in the brain's neural pathways. Here we applied the 'DICCCOL' framework to analyze structural connectivity in 261 twin pairs (522 participants; mean age 21.8 ± 2.7 y). We encoded connectivity patterns by projecting the white matter (WM) bundles of all 'DICCCOLs' as a tracemap (TM). Next, we fitted an A/C/E structural equation model to estimate the additive genetic (A), common environmental (C), and unique environmental/error (E) components of the observed variations in brain connectivity. We found 44 'heritable DICCCOLs' whose connectivity was genetically influenced (a² > 1%); half of them showed significant heritability (a² > 20%). Our analysis of genetic influences on WM structural connectivity suggests high heritability for some WM projection patterns, yielding new targets for genome-wide association studies.
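To illustrate how such A/C/E components relate to observable twin correlations, the Python sketch below computes the closed-form Falconer estimates (the study itself fits the structural equation model by maximum likelihood; the correlations used here are hypothetical values for a single connectivity measure):

    # Minimal sketch, not the paper's code: Falconer-style A/C/E estimates
    # from monozygotic (MZ) and dizygotic (DZ) twin correlations.
    def ace_from_twin_correlations(r_mz: float, r_dz: float) -> dict:
        """Estimate additive genetic (A), common environmental (C), and
        unique environmental/error (E) variance shares."""
        a2 = 2.0 * (r_mz - r_dz)   # a^2 = 2 (r_MZ - r_DZ)
        c2 = 2.0 * r_dz - r_mz     # c^2 = 2 r_DZ - r_MZ
        e2 = 1.0 - r_mz            # e^2 = 1 - r_MZ (includes measurement error)
        return {"A": a2, "C": c2, "E": e2}

    print(ace_from_twin_correlations(r_mz=0.6, r_dz=0.4))
    # -> approximately {'A': 0.4, 'C': 0.2, 'E': 0.4}

In practice, negative closed-form estimates are truncated at zero, and maximum-likelihood fitting additionally yields confidence intervals and significance tests for each component.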
Abstract:
In the world today there are many ways in which we measure, count and determine whether something is worth the effort or not. In Australia and many other countries, new government legislation is requiring government-funded entities to become more transparent in their practice and to develop a more cohesive narrative about their worth, or impact, for the betterment of society. This places the executives of such entities in a position of needing evaluative thinking and practice to guide how they may build the narrative that documents and demonstrates this type of impact. In thinking about where to start, executives and project and program managers may consider this workshop as a professional development opportunity to explore both the intended and unintended consequences of performance models as tools of evaluation. This workshop will offer participants an opportunity to unpack the place of performance models as an evaluative tool through the following questions:
· What shape does an ethical, sound and valid performance measure for an organization or personnel take?
· What role does cultural specificity play in the design and development of a performance model for an organization or for personnel?
· How are stakeholders able to identify risk during the design and development of such models?
· When and where will dissemination strategies be required?
· And so what? How can you determine that your performance model implementation has made a difference now or in the future?
Abstract:
We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11-based network carrying TCP-controlled file downloads or full-duplex packet telephone calls. In each case the analytical models utilize the attempt probabilities from a well-known fixed-point-based saturation analysis. For TCP-controlled file downloads, following Bruno et al. (In Networking '04, LNCS 3042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, analysis of the evolution between the embedded instants is done by using saturation analysis to provide state-dependent attempt probabilities. We show that, in spite of its simplicity, our model works well, by comparing various simulated quantities, such as collision probability, with values predicted from our model. Next we consider N constant-bit-rate VoIP calls terminating at N STAs. We model the number of STAs that have an up-link voice packet as a Markov renewal process embedded at so-called channel slot boundaries. Analysis of the evolution over a channel slot is done using saturation analysis as before. We find that again the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of codecs. We consider N_1 calls originating from Type 1 codecs and N_2 calls originating from Type 2 codecs. For G.711 and G.729 voice codecs, we show that the analytical model again provides accurate results in comparison with simulations.
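The AP-bottleneck capacity criterion described above can be summarized compactly (notation ours, not the paper's): with N active calls the AP must deliver N down-link packet streams, so the rate-stability threshold is

    \[
    N^{*} = \max\{\, N : N\,\lambda < \mu_{AP}(N) \,\},
    \]

where \lambda is the per-call packet rate fixed by the codec (e.g., 50 packets/s for G.711 with 20 ms packetization) and \mu_{AP}(N) is the AP's average packet service rate obtained from the saturation-based Markov renewal analysis; the delay-violation criterion then admits a number of calls strictly below N^{*}.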
Abstract:
The electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, the impact of using different models for cable insulation has not been investigated until now, apart from claims of relative accuracy. The steady-state electric field in the DC cable insulation is known to be a strong function of the DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations of the use of different empirical conductivity models suggested in the literature for HVDC cable applications. It is expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
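One representative member of the family of empirical conductivity models compared in such studies (stated generically here, not as the paper's preferred choice) is the exponential form

    \[
    \sigma(T, E) = \sigma_{0}\, e^{\alpha T}\, e^{\beta |E|},
    \]

with material constants \sigma_{0}, \alpha (temperature coefficient), and \beta (field coefficient). The steady-state field then follows from current continuity, \nabla \cdot \bigl(\sigma(T,E)\,\mathbf{E}\bigr) = 0: because the temperature falls from conductor to screen, \sigma decreases outward and the stress profile can invert, peaking at the screen as noted above. Different fitted (\alpha, \beta) pairs can therefore yield qualitatively different field distributions, which is the source of the discrepancies investigated here.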
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, provided that the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
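The inertial-oscillation mechanism invoked for the LLJ can be sketched with the classical Blackadar argument (stated generically, not taken from the thesis): once turbulent friction decouples the flow above a stable layer, the horizontal momentum equations reduce to

    \[
    \frac{\mathrm{d}u}{\mathrm{d}t} = f\,(v - v_{g}), \qquad
    \frac{\mathrm{d}v}{\mathrm{d}t} = -f\,(u - u_{g}),
    \]

so the ageostrophic wind vector rotates with constant magnitude at the inertial period 2\pi/f (roughly 14 h at the latitude of the Gulf of Finland), periodically producing super-geostrophic wind speeds, i.e., a low-level jet.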