926 results for MODELING APPROACH


Relevance: 30.00%

Publisher:

Abstract:

Integrated choice and latent variable (ICLV) models represent a promising new class of models that merges classic choice models with the structural equation modeling (SEM) approach for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by, first, estimating a multinomial choice model and, second, estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models for enhancing the understanding of choice processes. In addition to the usually studied, directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, influence travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
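The choice kernel of such a model can be sketched briefly: in a multinomial logit, a latent attitude score enters each mode's utility alongside observed attributes. The sketch below is a toy illustration with invented coefficients and data, not the ICLV estimation itself (which would jointly estimate the measurement and choice models, e.g. in Mplus).

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over utilities."""
    e = np.exp(v - v.max())
    return e / e.sum()

# Hypothetical utilities for three travel modes (car, train, bike).
# beta_time: sensitivity to travel time; beta_flex: weight on a latent
# "desire for flexibility" score entering each mode's utility.
beta_time, beta_flex = -0.05, 0.8
travel_time = np.array([30.0, 45.0, 60.0])   # minutes per mode
flexibility = np.array([1.0, 0.2, 0.9])      # latent-score loading per mode
latent_attitude = 1.5                        # individual's latent flexibility score

utility = beta_time * travel_time + beta_flex * flexibility * latent_attitude
probs = softmax(utility)
print(probs)  # choice probabilities summing to 1
```

With these invented values, the short, flexible car option receives the highest choice probability.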

Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application must render the virtual part accurately and at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize delays so as to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate, and formally validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of the components so defined. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed; it is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
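The latency budget such a model checks can be illustrated with a deliberately simplified sketch: unlike the timed-automata analysis, this only composes assumed worst-case component latencies arithmetically. All stage names and numbers are invented.

```python
# Hedged sketch (not the MIRELA semantics): composing worst-case component
# latencies of a sensing -> processing -> rendering pipeline and checking
# them against a real-time deadline, the kind of property a timed-automata
# model would verify exhaustively in UPPAAL.
def sequential(*stages):
    """Worst-case latency of stages executed one after another."""
    return sum(stages)

def parallel(*stages):
    """Worst-case latency of stages running concurrently (slowest wins)."""
    return max(stages)

# Hypothetical worst-case latencies in milliseconds.
acquire = parallel(12.0, 8.0)    # two sensors sampled concurrently
process = sequential(5.0, 3.0)   # filtering, then pose estimation
render = 16.0                    # one frame at ~60 Hz

end_to_end = sequential(acquire, process, render)
deadline_ms = 50.0
print(end_to_end, end_to_end <= deadline_ms)
```

A model checker additionally explores interleavings and timing nondeterminism, which this arithmetic sketch cannot capture.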

In terms of atmospheric impact, the volcanic eruption of Mt. Pinatubo (1991) is the best-characterized large eruption on record. We investigate here the model-derived stratospheric warming following the Pinatubo eruption as derived from SAGE II extinction data, including recent improvements in the processing algorithm. This method, termed SAGE_4λ, makes use of the four wavelengths (385, 452, 525 and 1024 nm) of the SAGE II data when available, and uses a data-filling procedure in the opacity-induced "gap" regions. Using SAGE_4λ, we derived aerosol size distributions that properly reproduce extinction coefficients at much longer wavelengths as well. This provides a good basis for calculating the absorption of terrestrial infrared radiation and the resulting stratospheric heating. However, we also show that the use of this data set in a global chemistry–climate model (CCM) still leads to stronger aerosol-induced stratospheric heating than observed, with temperatures in places even higher than the already overestimated values found by many models in recent general circulation model (GCM) and CCM intercomparisons. This suggests that the overestimation of the stratospheric warming after the Pinatubo eruption may be ascribed not to an insufficient observational database but rather to the use of outdated data sets, to deficiencies in the implementation of the forcing data, or to radiative or dynamical model artifacts. Conversely, the SAGE_4λ approach reduces the infrared absorption in the tropical tropopause region, resulting in significantly better agreement with the post-volcanic temperature record at these altitudes.
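The value of having several wavelengths can be illustrated with a hedged sketch: fitting an Angstrom-type power law to hypothetical extinction values at the four SAGE II channels and extrapolating to a longer wavelength. The actual SAGE_4λ algorithm retrieves full size distributions and is far more involved; the numbers below are invented.

```python
import numpy as np

# Illustrative power law ext(lambda) = A * lambda**(-alpha), fitted in
# log-log space to hypothetical extinction coefficients at the four
# SAGE II channels, then extrapolated to a longer wavelength.
wavelengths_nm = np.array([385.0, 452.0, 525.0, 1024.0])
extinction = np.array([4.0e-3, 3.1e-3, 2.5e-3, 1.0e-3])  # km^-1, invented

slope, log_A = np.polyfit(np.log(wavelengths_nm), np.log(extinction), 1)
alpha = -slope  # Angstrom exponent is the negative log-log slope

def extrapolate(lam_nm):
    """Extinction predicted by the fitted power law at wavelength lam_nm."""
    return np.exp(log_A) * lam_nm ** (-alpha)

print(alpha, extrapolate(1500.0))  # exponent and extinction at 1500 nm
```

The fitted exponent summarizes how quickly extinction falls off with wavelength, which is what constrains the inferred particle sizes.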

A general introduction to the state of the art in modeling metal-organic materials using transferable atomic multipoles is provided. The method is based on a building-block partitioning of the electron density, which is illustrated with examples of potential applications and with detailed discussions of its advantages and pitfalls. The interactions taking place between building blocks are summarized and used to discuss the properties that can be calculated.
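The leading terms of the interaction between two multipole sites can be sketched directly. This is a generic electrostatics illustration in atomic units with invented values, not the specific transferable-multipole scheme reviewed above.

```python
import numpy as np

# Leading interaction terms between two atomic multipole sites: a point
# charge q1 interacting with a site carrying a charge q2 and a dipole mu2.
def charge_charge(q1, q2, r_vec):
    """Charge-charge energy q1*q2/r (atomic units)."""
    r = np.linalg.norm(r_vec)
    return q1 * q2 / r

def charge_dipole(q1, mu2, r_vec):
    """Charge-dipole energy; r_vec points from the charge to the dipole site."""
    r = np.linalg.norm(r_vec)
    return -q1 * np.dot(mu2, r_vec) / r**3

r_vec = np.array([0.0, 0.0, 4.0])   # 4 bohr separation along z
q1, q2 = 1.0, -0.5
mu2 = np.array([0.0, 0.0, 0.3])     # dipole aligned with the axis

total = charge_charge(q1, q2, r_vec) + charge_dipole(q1, mu2, r_vec)
print(total)
```

Higher multipoles (quadrupoles and beyond) add further terms with steeper distance dependence; the building-block idea is that such site multipoles transfer between chemically similar fragments.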

Over the last ~20 years, soil spectral libraries storing near-infrared reflectance (NIR) spectra from diverse soil samples have been built for many places; for almost 10 years now this has included Tajikistan. Many calibration approaches have been reported and used for prediction from large and heterogeneous libraries, but most are hampered by the high diversity of the soils, whose mineral background heavily influences the spectral features. In such cases, local learning strategies have the advantage of building locally adapted calibrations, which can deal better with nonlinearities. Our major aim was therefore to identify the most efficient approach for developing an accurate and stable locally weighted calibration model using a spectral library compiled over the past years. Keywords: Tajikistan, near-infrared spectroscopy (NIRS), soil organic carbon, locally weighted regression, regional and local spectral library.
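A minimal sketch of the locally weighted idea, using synthetic data in place of a real NIR library: for each query spectrum, the k spectrally nearest library samples are selected and a distance-weighted linear model is fitted on the fly. The data and parameter choices here are invented for illustration.

```python
import numpy as np

# Synthetic "library": 200 spectra with 10 bands and an exactly linear
# relation to soil organic carbon (SOC), so the local fit is verifiable.
rng = np.random.default_rng(0)
library_spectra = rng.normal(size=(200, 10))
true_coef = rng.normal(size=10)
library_soc = library_spectra @ true_coef

def predict_local(query, k=30):
    """Distance-weighted local linear calibration around a query spectrum."""
    d = np.linalg.norm(library_spectra - query, axis=1)
    idx = np.argsort(d)[:k]            # k nearest neighbors in spectral space
    w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weights
    X = np.c_[np.ones(k), library_spectra[idx]]
    sw = np.sqrt(w)[:, None]           # weighted least squares via sqrt-weights
    coef, *_ = np.linalg.lstsq(X * sw, library_soc[idx] * sw.ravel(), rcond=None)
    return coef[0] + query @ coef[1:]

query = rng.normal(size=10)
print(predict_local(query), query @ true_coef)  # prediction vs. ground truth
```

Real calibrations would typically use PLS instead of plain least squares and tune k and the weighting kernel, but the "fit a fresh model per query" structure is the same.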

In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
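The island-parsing idea can be sketched minimally: everything is treated as "water" except the constructs of interest. Here a single regular expression stands in for one island rule; a real importer would refine such rules gradually, e.g. with parser combinators, as the abstract describes.

```python
import re

# Coarse-grained island rule: extract class declarations from otherwise
# unparsed source text, ignoring everything else ("water").
ISLAND = re.compile(r'\bclass\s+(\w+)')

def extract_classes(source: str):
    """Return class names found in the source, skipping all other content."""
    return ISLAND.findall(source)

java_snippet = """
public class Parser { }
// some unparsed water: if (x) { weird(); }
class Visitor extends Parser { }
"""
print(extract_classes(java_snippet))  # -> ['Parser', 'Visitor']
```

The payoff is that an initial model importer works without a full grammar for the target language and can be sharpened incrementally where the analysis needs detail.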

The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergy include reduced toxicity and minimized or delayed drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, such analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is thus a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for the case that parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments.
Third, and lastly, we developed a method based on Loewe additivity that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest account of the uncertainty in drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either of the histone deacetylation inhibitors suberoylanilide hydroxamic acid and trichostatin A in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for growth inhibition of ovarian cancer cells.
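The Loewe additivity criterion underlying the third contribution can be sketched as follows, assuming a simple Hill dose-response for each single agent; all parameter and dose values are invented for illustration, and the dissertation's Bayesian treatment of uncertainty is not reproduced here.

```python
# Loewe additivity: for doses d1, d2 given in combination, the interaction
# index is d1/D1 + d2/D2, where D1, D2 are the single-agent doses producing
# the same effect. Index < 1 suggests synergy, ~1 additivity, > 1 antagonism.
def hill_inverse(effect, ec50, h):
    """Dose of a single agent producing `effect` under a Hill model."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / h)

def loewe_index(d1, d2, effect, ec50_1, h1, ec50_2, h2):
    D1 = hill_inverse(effect, ec50_1, h1)
    D2 = hill_inverse(effect, ec50_2, h2)
    return d1 / D1 + d2 / D2

# A combination achieving 50% effect, with hypothetical Hill parameters.
idx = loewe_index(d1=0.3, d2=0.2, effect=0.5,
                  ec50_1=1.0, h1=1.0, ec50_2=0.8, h2=1.0)
print(idx, "synergy" if idx < 1 else "no synergy")
```

In practice the Hill parameters are estimated with uncertainty, which is exactly why a full probabilistic account (as in the dissertation) changes the conclusions a point estimate of this index would give.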

Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics (such as proliferation rate and adhesiveness, shaped by microenvironmental factors and gene expression) that govern tumor growth and invasiveness, also determines gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered, and outcomes are generally poor. What is needed is a deterministic, quantifiable method for understanding the connections between phenotype and tumor morphology. Here, we critically assess the advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we review a multiscale (i.e., from the molecular to the gross tumor scale) mathematical and computational "first-principles" approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed by in vitro, in vivo and ex vivo biology. We review the feasibility of this methodology, which, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics.
In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion, and other phenotypic characteristics: histopathologic information on the tumor boundary can be input to the mathematical model and used as a phenotype-diagnostic tool to predict collective and individual tumor cell invasion of the surrounding tissue. This approach further provides a means to deterministically test the effects of novel and hypothetical therapy strategies on tumor behavior.
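The reaction-diffusion backbone of such models can be illustrated with a minimal 1-D Fisher-KPP sketch for a normalized tumor cell density, u_t = D u_xx + rho u (1 - u); all parameters are invented, and no claim is made about the reviewed multiscale model's actual equations.

```python
import numpy as np

# Explicit finite-difference step of Fisher-KPP: diffusion spreads the cell
# density while logistic growth saturates it, producing a traveling front
# (the simplest caricature of an invading tumor margin).
D, rho = 0.1, 1.0
dx, dt, steps = 0.5, 0.1, 200          # dt*D/dx**2 = 0.04, stable
x = np.arange(0.0, 50.0, dx)
u = np.where(x < 5.0, 1.0, 0.0)        # initial tumor core

for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + rho * u * (1.0 - u))
    u[0], u[-1] = u[1], u[-2]          # no-flux boundaries

print(u.max(), u[x > 25].max())        # core stays ~1; front has not reached x=25
```

The front advances at roughly 2*sqrt(D*rho) per unit time, so phenotype parameters (proliferation, motility) directly set the invasion speed, which is the kind of phenotype-morphology link the review quantifies.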

As the understanding and representation of the impacts of volcanic eruptions on climate have improved over the last decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details of the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry–climate model (CCM) simulations. This new volcanic forcing, covering the period from 1600 to the present, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of each eruption. The evolution of the aerosol spatial and size distribution after the sulfur dioxide discharge is hence characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to the location/date of eruptions and the injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of the spatial distributions of shortwave and long-wave radiative perturbations, this volcanic forcing may help to better constrain climate model responses to volcanic eruptions over the 1600–present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors, calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.

Current models of embryological development focus on intracellular processes such as gene expression and protein networks, rather than on the complex relationship between subcellular processes and the collective cellular organization these processes support. We have explored this collective behavior in the context of neocortical development, by modeling the expansion of a small number of progenitor cells into a laminated cortex with layer- and cell-type-specific projections. The developmental process is steered by a formal language analogous to genomic instructions, and takes place in a physically realistic three-dimensional environment. A common genome inserted into the individual cells controls their individual behaviors, and thereby gives rise to collective developmental sequences in a biologically plausible manner. The simulation begins with a single progenitor cell containing the artificial genome. This progenitor then gives rise, through a lineage of offspring, to distinct populations of neuronal precursors that migrate to form the cortical laminae. The precursors differentiate by extending dendrites and axons, which reproduce the experimentally determined branching patterns of a number of different neuronal cell types observed in the cat visual cortex. This result is the first comprehensive demonstration of the principles of self-construction whereby the cortical architecture develops. In addition, our model makes several testable predictions concerning cell migration and branching mechanisms.

Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective through its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector autoregression (VAR), an extension of univariate autoregression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of the psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to post-treatment symptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which patients' alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapists' stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
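The VAR core of TSPA can be sketched on simulated data: two process variables (say, alliance and self-efficacy) are regressed on their own previous-session values, y_t = A y_{t-1} + e_t, and the off-diagonal (cross-lagged) coefficients of A are the quantities interpreted as mechanisms of change. The data below are simulated; this is the generic VAR(1) fit, not the study's full aggregation procedure.

```python
import numpy as np

# Simulate a stationary bivariate VAR(1) process and recover its
# coefficient matrix by ordinary least squares.
rng = np.random.default_rng(1)
A_true = np.array([[0.6, 0.2],
                   [0.3, 0.5]])      # off-diagonals = cross-lagged effects
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

Y_prev, Y_next = y[:-1], y[1:]
A_hat, *_ = np.linalg.lstsq(Y_prev, Y_next, rcond=None)
A_hat = A_hat.T                      # rows: equations, columns: lagged predictors
print(np.round(A_hat, 2))
```

In TSPA such an A is estimated per patient and the individual matrices are then aggregated to look for prototypical patterns, such as the alliance/self-efficacy feedback loop reported above.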

Introduction: According to the theoretical model of Cranach, Ochsenbein, and Valach (1986), understanding group actions requires consideration of aspects at both the group level and the level of individual members. For example, the individual action units constituting group actions are motivated at the individual level while potentially being affected by characteristics of the group. Theoretically, group efficacy beliefs could be part of this motivational process, as they are an individual's cognitive contents about group-level abilities to perform well in a specific task. Positive relations between group-level efficacy beliefs and group performance have been reported, and Bandura and Locke (2003) argue that this relationship is mediated by motivational processes and goal setting. The aims of this study were (a) to examine the effects of group characteristics on individual performance motivation and (b) to test whether those effects are mediated by individual group efficacy beliefs. Methods: Forty-seven students (M = 22.83 years, SD = 2.83, 34% women) of the University of Bern participated in this scenario-based experiment. Data were collected at two collection points. Subjects were provided information about fictive team members with whom they had to perform a group triathlon. Three values (low, medium, high) of the other team members' abilities to perform in their parts of the triathlon (swimming and biking, respectively) were combined in a 3x3 full factorial design (Anderson, 1982), yielding nine groups. Subjects were asked how confident they were that the teams would perform well in the task (individual group efficacy beliefs) and to provide information about their motivation to perform at their best in the respective group contexts (performance motivation). Multilevel modeling (Mplus) was used to estimate the effects of the factors swim and bike, and of the context-varying covariate individual group efficacy beliefs, on performance motivation.
Further analyses were undertaken to test whether the effects of group contexts on performance motivation are mediated by individual group efficacy beliefs. Results: Significant effects on performance motivation were found for both the group characteristics (βswim = 7.86; βbike = 8.57; both p < .001) and the individual group efficacy beliefs (βigeb = .40, p < .001). The subsequent mediation model indicated that the effects of group characteristics on performance motivation were partly mediated by the subjects' individual group efficacy beliefs, with significant mediation effects for both the swim and bike factors. Discussion/Conclusion: The results of the study provide further support for the motivational character of efficacy beliefs and point out a mechanism by which team characteristics influence performance-relevant factors at the level of individual team members. The study indicates that high team abilities lead to augmented performance motivation, adding a psychological advantage to teams already high on task-relevant abilities. Future investigations will aim at possibilities for keeping individual performance motivation high in groups with low task-relevant abilities. One possibility could be the formulation of individual task goals. References: Anderson, N. H. (1982). Methods of information integration theory. New York: Academic Press. Bandura, A., & Locke, E. A. (2003). Negative self-efficacy and goal effects revisited. Journal of Applied Psychology, 88, 87-99. Cranach, M. von, Ochsenbein, G., & Valach, L. (1986). The group as a self-active system: Outline of a theory of group action. European Journal of Social Psychology, 16, 193-229.
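The product-of-coefficients logic of such a mediation test can be sketched on simulated data. This illustrates the general idea (X affects Y partly through M), not the multilevel Mplus model used in the study; all effect sizes are invented.

```python
import numpy as np

# Simulate a mediation structure: group characteristic X -> efficacy
# beliefs M -> performance motivation Y, plus a direct X -> Y path.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)                        # group characteristic (e.g., ability)
m = 0.5 * x + rng.normal(scale=0.5, size=n)   # efficacy beliefs
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.5, size=n)

a = np.polyfit(x, m, 1)[0]                    # X -> M path
X2 = np.c_[np.ones(n), m, x]
b = np.linalg.lstsq(X2, y, rcond=None)[0][1]  # M -> Y path, controlling for X

indirect = a * b                              # indirect (mediated) effect
print(round(indirect, 2))                     # close to 0.5 * 0.4 = 0.2
```

Because the direct X -> Y path (0.3 here) remains nonzero, this corresponds to the partial mediation pattern reported in the results.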

Both theoretically and empirically, there is continuing interest in understanding the specific relation between cognitive and motor development in childhood. The present longitudinal study, comprising three measurement points, targeted this relation. At the beginning of the study, the participating children were 5- to 6-year-olds. By assessing participants' fine motor skills, executive functioning, and non-verbal intelligence, their cross-sectional and cross-lagged interrelations were examined. Additionally, performance in these three areas was used to predict early school achievement (in terms of mathematics, reading, and spelling) at the end of participants' first grade. Correlational analyses and structural equation modeling revealed that fine motor skills, non-verbal intelligence, and executive functioning were significantly interrelated. Both fine motor skills and intelligence had significant links to later school achievement. However, when executive functioning was additionally included in the prediction of early academic achievement, fine motor skills and non-verbal intelligence were no longer significantly associated with later school performance, suggesting that executive functioning plays an important role in the motor-cognitive performance link.

Two factors that have been suggested as key to explaining individual differences in fluid intelligence are working memory and sensory discrimination ability. A latent variable approach was used to explore the relative contributions of these two variables to individual differences in fluid intelligence in middle to late childhood. A sample of 263 children aged 7–12 years was examined. Correlational analyses showed that general discrimination ability (GDA) and working memory (WM) were related to each other and to fluid intelligence. Structural equation modeling showed that within both the younger and older age groups and the sample as a whole, the relation between GDA and fluid intelligence could be accounted for by WM. While WM was able to predict variance in fluid intelligence above and beyond GDA, GDA was not able to explain significant amounts of variance in fluid intelligence, either in the whole sample or within the younger or older age group. We conclude that, compared to GDA, WM should be considered the better predictor of individual differences in fluid intelligence in childhood. WM and fluid intelligence, while not separable in middle childhood, develop at different rates, becoming more separable with age.

Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare the predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The modeled SOA formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper-limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area.
Our results strongly suggest that precursors other than VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27%, 35–61%, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3)%. The relative contribution of each source is uncertain by almost a factor of 2, depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent of the SOA. A regional SOA background of approximately 2.1 μg m-3 is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions.
This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr-1 of SOA globally, or 17% of global SOA, one-third of which is likely to be non-fossil.
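The simplest (VOC-only) yield calculation, whose insufficiency the study demonstrates, can be sketched in a few lines. The yields and reacted precursor masses below are invented for illustration; real parameterizations are volatility- and NOx-dependent and include multi-generation oxidation.

```python
# Fixed-yield V-SOA estimate: multiply the reacted mass of each precursor
# by an assumed mass yield and sum. All numbers are illustrative only.
yields = {"toluene": 0.10, "xylenes": 0.08, "naphthalene": 0.20}
reacted_ug_m3 = {"toluene": 8.0, "xylenes": 5.0, "naphthalene": 0.3}

soa = sum(yields[k] * reacted_ug_m3[k] for k in yields)
print(round(soa, 3))  # predicted V-SOA in ug m^-3
```

Because schemes of this form omit S/IVOC precursors entirely, they underpredict urban SOA by large factors, which is what motivates the P-S/IVOC parameterizations compared in the study.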