929 results for Single Equation Models
Abstract:
This thesis presents an overview of hydrographic surveying and of modern and traditional surveying equipment, together with data acquisition using a traditional single beam sonar system and a modern fully autonomous underwater vehicle, the IVER3. The data sets were collected using vehicles of the Great Lakes Research Center at Michigan Technological University. The thesis also shows how to process and edit the bathymetric data in SonarWiz5. Three-dimensional models were then created after importing the data sets into a common coordinate system; in these interpolated surfaces, details and excavations are easily visible on the surface models. Profiles are plotted on the surface models to compare the sensors and the details resolved on the seabed. It is shown that single beam sonar can miss details, such as pipelines and abrupt elevation changes on the seabed, compared with the side scan sonar of the IVER3, because the side scan sonar acquires better resolution. However, single beam sonar can sometimes save project time and money: the instrument is cheaper than a side scan sonar, and processing can be easier than for side scan data.
Abstract:
This tutorial gives a step-by-step explanation of how to use experimental data to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. It is not tied to a specific simulation engine; rather, it covers the kinds of data needed for constructing a model, how they are used, and potential pitfalls in the process.
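As a minimal illustration of the kind of model the tutorial describes, the sketch below integrates a two-compartment passive membrane with forward-Euler steps. All parameter values and the function name are invented for illustration; real modeling work of this kind would use a dedicated simulator such as NEURON or GENESIS.

```python
import numpy as np

def simulate(t_end=0.05, dt=1e-5, i_inj=1e-10):
    """Two passive compartments (soma, dendrite) coupled by an axial
    conductance; constant current i_inj (A) injected into the soma.
    All parameters are illustrative, not from any published model."""
    cm, g_leak, e_leak, g_c = 1e-10, 5e-9, -0.065, 2e-9  # F, S, V, S
    v = np.array([-0.065, -0.065])  # membrane potentials (V), at rest
    for _ in range(int(t_end / dt)):
        i_couple = g_c * (v[1] - v[0])          # axial current into soma
        dv0 = (g_leak * (e_leak - v[0]) + i_couple + i_inj) / cm
        dv1 = (g_leak * (e_leak - v[1]) - i_couple) / cm
        v = v + dt * np.array([dv0, dv1])
    return v
```

Injecting current at the soma depolarizes both compartments, with the dendrite lagging behind; this is the simplest instance of the compartmental formalism the tutorial builds up from experimental data.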
Abstract:
Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation modeling (SEM) approach for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications, first, by estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models for enhancing the understanding of choice processes. In addition to the usually studied, directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has mostly been restricted to single-agent settings, where observed motions are adapted to new environment conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
Abstract:
Peatlands are widely exploited archives of paleoenvironmental change. We developed and compared multiple transfer functions to infer peatland depth to the water table (DWT) and pH based on testate amoeba (percentages, or presence/absence), bryophyte presence/absence, and vascular plant presence/absence data from sub-alpine peatlands in the SE Swiss Alps, in order to 1) compare the performance of single-proxy vs. multi-proxy models and 2) assess the performance of presence/absence models. Bootstrapping cross-validation showed that the best-performing single-proxy transfer functions for both DWT and pH were those based on bryophytes. The best-performing transfer functions overall for DWT were those based on combined testate amoeba percentages, bryophytes, and vascular plants; and, for pH, those based on testate amoebae and bryophytes. The comparison of DWT and pH inferred from testate amoeba percentages and presence/absence data showed similar general patterns but differences in the magnitude and timing of some shifts. These results point to new directions for paleoenvironmental research, 1) suggesting that it is possible to build well-performing transfer functions using presence/absence data, although with some loss of accuracy, and 2) supporting the idea that multi-proxy inference models may improve paleoecological reconstruction. The performance of multi-proxy and single-proxy transfer functions should be further compared on paleoecological data.
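Transfer functions of this kind are commonly built by weighted averaging (WA): each taxon gets an "optimum" (the abundance-weighted mean of the environmental variable across training samples), and a fossil sample's inferred value is the abundance-weighted mean of its taxa's optima. The sketch below shows that logic with made-up data; actual studies use dedicated packages (e.g. rioja in R) with cross-validation and deshrinking, which are omitted here.

```python
import numpy as np

def wa_optima(abundances, env):
    """Taxon optima: abundance-weighted mean of the environmental
    variable (e.g. DWT or pH) per taxon.
    abundances: (samples x taxa) matrix; env: value per sample."""
    return (abundances * env[:, None]).sum(axis=0) / abundances.sum(axis=0)

def wa_infer(sample, optima):
    """Inferred environmental value for one (fossil) sample."""
    return float(sample @ optima / sample.sum())

# Toy training set: taxon 0 occurs only at env = 2, taxon 1 only at env = 10.
abund = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
env = np.array([2.0, 10.0])
optima = wa_optima(abund, env)
```

A mixed sample with equal abundances of both taxa is then inferred to lie midway between the two optima, which is the intuition behind both the percentage-based and the presence/absence variants compared in the study.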
Abstract:
Background: Accelerometry has been established as an objective method that can be used to assess physical activity behavior in large groups. The purpose of the current study was to provide a validated equation to translate accelerometer counts of the triaxial GT3X into energy expenditure in young children. Methods: Thirty-two children aged 5–9 years performed locomotor and play activities that are typical for their age group. Children wore a GT3X accelerometer and their energy expenditure was measured with indirect calorimetry. Twenty-one children were randomly selected to serve as the development group. A cubic 2-regression model involving separate equations for locomotor and play activities was developed on the basis of model fit. It was then validated using data from the remaining children and compared with a linear 2-regression model and a linear 1-regression model. Results: All 3 regression models produced strong correlations between predicted and measured MET values. Agreement was acceptable for the cubic model and good for both linear regression approaches. Conclusions: The current linear 1-regression model provides valid estimates of energy expenditure for ActiGraph GT3X data for 5- to 9-year-old children and shows equal or better predictive validity than a cubic or a linear 2-regression model.
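A linear 1-regression model of the kind validated here is just an ordinary least-squares line mapping counts to METs. The sketch below fits such a line; the counts and MET values are invented for illustration and are not the study's calibration data, and `predict_mets` is a hypothetical helper, not the published GT3X equation.

```python
import numpy as np

# Made-up calibration pairs: accelerometer counts/min vs. measured METs.
counts = np.array([100.0, 500.0, 1500.0, 3000.0, 5000.0, 7000.0])
mets   = np.array([1.2, 1.8, 3.0, 4.5, 6.1, 7.8])

# Least-squares fit of METs ~ a * counts + b (a single equation for
# all activities, i.e. the 1-regression approach).
A = np.vstack([counts, np.ones_like(counts)]).T
a, b = np.linalg.lstsq(A, mets, rcond=None)[0]

def predict_mets(c):
    """Hypothetical predictor: energy expenditure (METs) from counts."""
    return a * c + b
```

The 2-regression variants in the study differ only in first classifying an epoch as locomotor vs. play and then applying a separate (linear or cubic) equation to each class.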
Abstract:
Over the last century, several mathematical models have been developed to calculate blood alcohol concentration (BAC) from the amount of ingested ethanol and vice versa. The most common one in forensic science is Widmark's equation. A drinking experiment with 10 volunteer test subjects was performed with a target BAC of 1.2 g/kg, estimated using Widmark's equation together with Watson's factor. The ethanol concentrations in the blood were measured using headspace gas chromatography with flame ionization detection and, additionally, with an alcohol dehydrogenase (ADH)-based method. In a healthy 75-year-old man, a distinct discrepancy between the intended and the measured blood ethanol concentration was observed: a concentration of 1.83 g/kg was measured and the man showed signs of intoxication. A possible explanation for the discrepancy is the reduced total body water content in older people. The incident shows that caution is advised when applying these mathematical models to aged people, since calculated results can diverge from the underlying biological system.
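The two formulas named in the abstract are standard and can be sketched directly. Widmark's equation gives BAC as C = A/(r·W) − β·t (A ingested ethanol in grams, W body weight, r the Widmark distribution factor, β the elimination rate); Watson's formula estimates total body water, from which an individualized r can be derived. The elimination rate (0.15 g/kg/h) and blood water fraction (0.8) below are textbook approximations, not values from this study.

```python
def watson_tbw_male(age_y, height_cm, weight_kg):
    """Watson's formula: total body water (litres) for men."""
    return 2.447 - 0.09516 * age_y + 0.1074 * height_cm + 0.3362 * weight_kg

def widmark_bac(alcohol_g, weight_kg, r, beta=0.15, hours=0.0):
    """Widmark's equation: BAC (g/kg) after first-order elimination."""
    return max(alcohol_g / (r * weight_kg) - beta * hours, 0.0)

# Illustrative subject (not one of the study's volunteers):
# a 75-year-old man, 170 cm, 70 kg.
tbw = watson_tbw_male(75, 170, 70)   # litres of total body water
r = tbw / (0.8 * 70)                 # Watson-derived Widmark factor
dose = 1.2 * r * 70                  # grams of ethanol targeting 1.2 g/kg
```

Because Watson's TBW estimate decreases with age, the derived r shrinks for older subjects, so a dose computed with a population-average r can overshoot the target BAC, which is exactly the kind of discrepancy the abstract reports.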
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Calmodulin (CaM) is a ubiquitous Ca(2+) buffer and second messenger that affects cellular functions as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca(2+)-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD requires activation of the Ca(2+)-CaM-dependent enzymes Ca(2+)/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet it remains unclear how Ca(2+) and CaM produce these two opposing effects, LTP and LTD. CaM binds four Ca(2+) ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca(2+) and its downstream targets, suggesting that each lobe of CaM responds differently to Ca(2+) signal patterns. Here, we use a novel event-driven particle-based Monte Carlo simulation and statistical point pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca(2+)-CaM interaction at the single-molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nano-scale domain of activation that is highly sensitive to the location of Ca(2+) channels and to the microscopic injection rate of Ca(2+) ions. We also demonstrate that Ca(2+) saturation takes place via two different pathways depending on the Ca(2+) injection rate, one dominated by the N-terminal lobe and the other by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca(2+) sensors that can differentially transduce Ca(2+) influx to downstream targets. We discuss a possible role of the N-terminal lobe-specific Ca(2+)-CaM nano-domain in the CaMKII activation required for the induction of synaptic plasticity.
Abstract:
This study analyses the impact on the oceanic mean state of the evolution of the oceanic component (NEMO) of the climate model developed at Institut Pierre Simon Laplace (IPSL-CM), from the version IPSL-CM4, used for the third phase of the Coupled Model Intercomparison Project (CMIP3), to IPSL-CM5A, used for CMIP5. Several modifications have been implemented between these two versions, in particular an interactive coupling with a biogeochemical module, a 3-band model for the penetration of solar radiation, partial steps at the bottom of the ocean, and a set of physical parameterisations to improve the representation of the impact of turbulent and tidal mixing. A set of forced and coupled experiments is used to single out the effect of each of these modifications and, more generally, the evolution of the oceanic component across the IPSL coupled model family. Major improvements are located in the Southern Ocean, where physical parameterisations such as partial steps and tidal mixing reinforce the barotropic transport of water mass, in particular in the Antarctic Circumpolar Current, and ensure a better representation of Antarctic bottom water masses. However, our analysis highlights that modifications which substantially improve ocean dynamics in the forced configuration can yield or amplify biases in the coupled configuration. In particular, the activation of radiative biophysical coupling between the biogeochemical cycle and ocean dynamics results in a cooling of the ocean mean state. This illustrates the difficulty of improving and tuning coupled climate models, given the large number of degrees of freedom and the potential compensating effects masking some biases.
Abstract:
Radiotherapy involving the thoracic cavity and chemotherapy with the drug bleomycin are both dose limited by the development of pulmonary fibrosis. From evidence that there is variation in the population in susceptibility to pulmonary fibrosis, and from animal data, it was hypothesized that individual variation in susceptibility to bleomycin-induced, or radiation-induced, pulmonary fibrosis is, in part, genetically controlled. In this thesis a three-generation mouse genetic model of C57BL/6J (fibrosis prone) and C3Hf/Kam (fibrosis resistant) mouse strains and F1 and F2 (F1 intercross) progeny derived from the parental strains was developed to investigate the genetic basis of susceptibility to fibrosis. In the bleomycin studies the mice received 100 mg/kg (125 for females) of bleomycin via mini-osmotic pump. The animals were sacrificed at eight weeks following treatment or when their breathing rate indicated respiratory distress. In the radiation studies the mice were given a single dose of 14 or 16 Gy (60Co) to the whole thorax and were sacrificed when moribund. The phenotype was defined as the percent of fibrosis area in the left lung as quantified by image analysis of histological sections. Quantitative trait locus (QTL) mapping was used to identify the chromosomal location of genes which contribute to susceptibility to bleomycin-induced pulmonary fibrosis in C57BL/6J mice compared to C3Hf/Kam mice, and to determine whether the QTLs which influence susceptibility to bleomycin-induced lung fibrosis in these progenitor strains could be implicated in susceptibility to radiation-induced lung fibrosis. For bleomycin, a genome-wide scan revealed QTLs on chromosome 17, at the MHC (LOD = 11.7 for males and 7.2 for females), accounting for approximately 21% of the phenotypic variance, and on chromosome 11 (LOD = 4.9), in male mice only, adding 8% of phenotypic variance.
The bleomycin QTL on chromosome 17 was also implicated in susceptibility to radiation-induced fibrosis (LOD = 5.0) and contributes 7% of the phenotypic variance in the radiation study. In conclusion, susceptibility to both bleomycin-induced and radiation-induced pulmonary fibrosis is a heritable trait, influenced by a genetic factor which maps to a genomic region containing the MHC.
Abstract:
The factorial validity of the SF-36 was evaluated using confirmatory factor analysis (CFA), structural equation modeling (SEM), and multigroup structural equation modeling (MSEM). First, the measurement and structural model of the hypothesized SF-36 was explicated. Second, the model was tested for the validity of a second-order factorial structure; upon evidence of model misfit, the best-fitting model was determined and its validity was tested on a second random sample from the same population. Third, the best-fitting model was tested for invariance of the factorial structure across race, age, and educational subgroups using MSEM. The findings support the second-order factorial structure of the SF-36 as proposed by Ware and Sherbourne (1992). However, the results suggest that: (a) Mental Health and Physical Health covary; (b) general mental health cross-loads onto Physical Health; (c) general health perception loads onto Mental Health instead of Physical Health; (d) many of the error terms are correlated; and (e) the physical function scale is not reliable across these two samples. This hierarchical factor pattern was replicated across both samples of health care workers, suggesting that the post hoc model fitting was not data specific. Subgroup analysis suggests that the physical function scale is not reliable across the "age" or "education" subgroups and that the general mental health scale path from Mental Health is not reliable across the "white/nonwhite" or "education" subgroups. The importance of this study lies in the use of SEM and MSEM in evaluating sample data from the SF-36. These methods are uniquely suited to the analysis of latent variable structures and are widely used in other fields. The use of latent variable models for self-reported outcome measures has become widespread and should now be applied to medical outcomes research.
Invariance testing is superior to mean scores or summary scores when evaluating differences between groups. From a practical as well as psychometric perspective, it seems imperative that construct validity research related to the SF-36 establish whether this same hierarchical structure and invariance holds for other populations. This project is presented as three articles to be submitted for publication.
Abstract:
Application of biogeochemical models to the study of marine ecosystems is pervasive, yet objective quantification of these models' performance is rare. Here, 12 lower trophic level models of varying complexity are objectively assessed in two distinct regions (equatorial Pacific and Arabian Sea). Each model was run within an identical one-dimensional physical framework. A consistent variational adjoint implementation assimilating chlorophyll-a, nitrate, export, and primary productivity was applied and the same metrics were used to assess model skill. Experiments were performed in which data were assimilated from each site individually and from both sites simultaneously. A cross-validation experiment was also conducted whereby data were assimilated from one site and the resulting optimal parameters were used to generate a simulation for the second site. When a single pelagic regime is considered, the simplest models fit the data as well as those with multiple phytoplankton functional groups. However, those with multiple phytoplankton functional groups produced lower misfits when the models are required to simulate both regimes using identical parameter values. The cross-validation experiments revealed that as long as only a few key biogeochemical parameters were optimized, the models with greater phytoplankton complexity were generally more portable. Furthermore, models with multiple zooplankton compartments did not necessarily outperform models with single zooplankton compartments, even when zooplankton biomass data are assimilated. Finally, even when different models produced similar least squares model-data misfits, they often did so via very different element flow pathways, highlighting the need for more comprehensive data sets that uniquely constrain these pathways.
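The "least squares model-data misfit" minimized by a variational adjoint scheme can be written schematically as a weighted sum of squared residuals over all assimilated data types (chlorophyll-a, nitrate, export, primary productivity). The sketch below shows that cost function in its generic form; the function name and uniform weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def misfit(model, obs, sigma):
    """Weighted least-squares model-data misfit:
    J = sum_i ((model_i - obs_i) / sigma_i)^2,
    where sigma_i is the assumed observational uncertainty."""
    model, obs, sigma = map(np.asarray, (model, obs, sigma))
    return float((((model - obs) / sigma) ** 2).sum())
```

In the assimilation experiments described above, the optimizer adjusts the biogeochemical parameters to minimize J; the cross-validation test then evaluates J at a second site using parameters optimized at the first.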
An Early-Warning System for Hypo-/Hyperglycemic Events Based on Fusion of Adaptive Prediction Models
Abstract:
Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches were investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm performs postprocessing of the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusion of the cARX and RNN models, due to their complementary prediction performances, resulted in the hybrid autoregressive with an output correction module/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of accuracy of 100.0% (100.0%), detection time of 10.0 (8.0) min, and daily false alarms of 0.7 (0.5). The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3) and, for the RNN-based system, 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed both, with 100.0% (100.0%) prediction accuracy, event detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of cARX and RNN models for the development of an EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
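An ARX predictor of the kind used as the first modeling approach can be sketched as a least-squares fit of past outputs (glucose) and inputs (e.g. insulin) to the next output, followed by a threshold-based alert. The model orders, regressor layout, and the 70 mg/dL alert threshold below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fit_arx(y, u, na=3, nb=2):
    """Least-squares fit of the ARX model
    y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    m = max(na, nb)
    rows, targets = [], []
    for k in range(m, len(y)):
        rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

def predict_next(theta, y, u, na=3, nb=2):
    """One-step-ahead prediction from the most recent samples."""
    x = np.r_[y[-na:][::-1], u[-nb:][::-1]]
    return float(x @ theta)

def hypo_alert(pred, threshold=70.0):
    """Warning rule: alert when predicted glucose (mg/dL) crosses
    a hypoglycemia threshold (illustrative value)."""
    return pred < threshold
```

An online-adaptive variant would refit (or recursively update) theta as each new CGM sample arrives; the cARX and RNN models in the study refine this basic scheme with output correction and nonlinear dynamics.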
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat, and the properties of scattering in both the optical and the infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
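For orientation, the two-stream equations referred to here take, in the standard textbook moment-method form, a pair of coupled first-order equations for the outgoing and incoming fluxes. The schematic below is general background in one common convention (single-scattering albedo $\omega_0$, Planck function $B$, and coupling coefficients $\gamma_1,\gamma_2$ whose values depend on the chosen closure and the scattering asymmetry); it is not the paper's specific derivation, whose whole point is how the closure choice fixes these coefficients.

```latex
\frac{dF_{\uparrow}}{d\tau} = \gamma_1 F_{\uparrow} - \gamma_2 F_{\downarrow}
  - 2\pi\,(1-\omega_0)\,B(T),
\qquad
\frac{dF_{\downarrow}}{d\tau} = \gamma_2 F_{\uparrow} - \gamma_1 F_{\downarrow}
  + 2\pi\,(1-\omega_0)\,B(T).
```

The hemispheric versus Eddington closures discussed in the abstract correspond to different prescriptions for $\gamma_1$ and $\gamma_2$, which is why the choice of closure changes the predicted reflected light and thermal emission.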