918 results for dynamic factor models
Abstract:
Vegetation has a profound effect on flow and sediment transport processes in natural rivers by increasing both skin friction and form drag. The increase in drag introduces a drag discontinuity between the in-canopy flow and the flow above, which leads to the development of an inflection point in the velocity profile, resembling a free shear layer. Drag therefore acts as the primary driver for the entire canopy system. Most current numerical hydraulic models that incorporate vegetation rely either on simple, static plant forms or on canopy-scale drag terms. However, we suggest that these are insufficient, as vegetation canopies represent complex, dynamic, porous blockages within the flow that are subject to spatially and temporally varying drag forces. Here we present a dynamic drag methodology within a CFD framework. Preliminary results for a benchmark cylinder case highlight the accuracy of the method and suggest its applicability to more complex cases.
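The drag term referred to above is commonly written as a quadratic law. As a rough illustration (not the specific formulation used in the paper), the sketch below computes the canopy drag sink F_d = -0.5 ρ C_d a |u| u that is typically added to the momentum equations in canopy-flow CFD; the coefficient values C_d and a are invented for the example.

```python
import numpy as np

def canopy_drag_sink(u, rho=1000.0, Cd=1.0, a=5.0):
    """Quadratic canopy drag per unit volume, F_d = -0.5 * rho * Cd * a * |u| * u,
    of the kind typically added as a momentum sink in canopy-flow CFD.
    Cd (bulk drag coefficient) and a (frontal plant area per unit volume, 1/m)
    are illustrative values, not those of the paper."""
    u = np.asarray(u, dtype=float)
    return -0.5 * rho * Cd * a * np.linalg.norm(u) * u  # N/m^3, opposes the flow

# Example: drag sink for a 0.3 m/s streamwise flow through the canopy
print(canopy_drag_sink([0.3, 0.0, 0.0]))
```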
Abstract:
The present study tests the relationships between three frequently used personality models, as evaluated by the Temperament and Character Inventory-Revised (TCI-R), the Neuroticism Extraversion Openness Five-Factor Inventory-Revised (NEO-FFI-R), and the Zuckerman-Kuhlman Personality Questionnaire-50-Cross-Cultural (ZKPQ-50-CC). The results were obtained from a sample of 928 volunteer subjects from the general population, aged between 17 and 28 years. Frequency distributions and alpha reliabilities for the three instruments were acceptable. Correlational and factorial analyses showed that several scales in the three instruments share an appreciable amount of common variance. Five factors emerged from a principal components analysis. The first factor comprised A (Agreeableness), Co (Cooperativeness), and Agg-Host (Aggressiveness-Hostility), with secondary loadings on C (Conscientiousness) and SD (Self-Directedness) from other factors. The second factor was composed of N (Neuroticism), N-Anx (Neuroticism-Anxiety), HA (Harm Avoidance), and SD (Self-Directedness). The third factor comprised Sy (Sociability), E (Extraversion), RD (Reward Dependence), ImpSS (Impulsive Sensation Seeking), and NS (Novelty Seeking). The fourth factor comprised Ps (Persistence), Act (Activity), and C, whereas the fifth and last factor was composed of O (Openness) and ST (Self-Transcendence). Confirmatory factor analyses indicate that the scales in each model are highly interrelated and define the specified latent dimensions well. Similarities and differences between the three instruments are further discussed.
Abstract:
The paper is motivated by the valuation problem of guaranteed minimum death benefits in various equity-linked products. At the time of death, a benefit payment is due. It may depend not only on the price of a stock or stock fund at that time, but also on prior prices. The problem is to calculate the expected discounted value of the benefit payment. Because the distribution of the time of death can be approximated by a combination of exponential distributions, it suffices to solve the problem for an exponentially distributed time of death. The stock price process is assumed to be the exponential of a Brownian motion plus an independent compound Poisson process whose upward and downward jumps are modeled by combinations (or mixtures) of exponential distributions. Results for exponential stopping of a Lévy process are used to derive a series of closed-form formulas for call, put, lookback, and barrier options, dynamic fund protection, and dynamic withdrawal benefit with guarantee. We also discuss how barrier options can be used to model lapses and surrenders.
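The quantity described above, the expected discounted value of the death benefit under an exponentially distributed time of death, can be illustrated with a brute-force Monte Carlo check of the kind one might use to sanity-check the closed-form formulas. The sketch below prices a put-type guarantee when the log-price is a Brownian motion plus a compound Poisson process with exponential up and down jumps; all parameter values (mu, sigma, lam_death, delta, jump intensities) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmdb_put_mc(S0=100.0, K=100.0, mu=0.04, sigma=0.2,
                lam_death=0.05, delta=0.03,
                jump_rate=0.5, p_up=0.4, eta_up=10.0, eta_dn=8.0,
                n_paths=20_000):
    """Monte Carlo estimate of E[exp(-delta*T) * max(K - S_T, 0)], where
    T ~ Exp(lam_death) is the time of death and the log-price is a Brownian
    motion plus a compound Poisson process with exponential up/down jumps.
    All parameter values are illustrative assumptions."""
    T = rng.exponential(1.0 / lam_death, n_paths)                    # time of death
    X = mu * T + sigma * np.sqrt(T) * rng.standard_normal(n_paths)   # diffusion part
    N = rng.poisson(jump_rate * T)                                   # jumps up to T
    for i in np.nonzero(N)[0]:
        up = rng.random(N[i]) < p_up
        jumps = np.where(up, rng.exponential(1.0 / eta_up, N[i]),
                         -rng.exponential(1.0 / eta_dn, N[i]))
        X[i] += jumps.sum()
    S_T = S0 * np.exp(X)
    payoff = np.maximum(K - S_T, 0.0)                                # put-type death benefit
    return float(np.mean(np.exp(-delta * T) * payoff))

print(round(gmdb_put_mc(), 4))
```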
Abstract:
Occupational exposure modeling is widely used in the context of the EU regulation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Targeted Risk Assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine the dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC TRA, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainty in any one modifying factor is less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate spanning 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain about two or more decisions among the entry parameters, Stoffenmanager may be more robust than ART.
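As a schematic illustration of the local, one-at-a-time sensitivity analysis described above (and not the actual ECETOC TRA, Stoffenmanager, or ART equations), the sketch below perturbs the modifying factors of a hypothetical multiplicative exposure model one at a time and reports how many orders of magnitude of the exposure estimate each decision spans.

```python
import numpy as np

# Hypothetical multiplicative exposure model: E = base * product of modifiers.
# This is a stand-in for tier-1/tier-2 tools, which combine multiplicative
# modifying factors; it is not the ECETOC TRA / Stoffenmanager / ART model.
def exposure(factors):
    return factors["base"] * factors["ventilation"] * factors["duration"] * factors["lev"]

nominal = {"base": 10.0, "ventilation": 0.7, "duration": 1.0, "lev": 0.1}

# One-at-a-time local sensitivity: change in log10(exposure) per "decision step",
# here represented as halving / doubling a modifier, purely for illustration.
for name in ("ventilation", "duration", "lev"):
    lo, hi = dict(nominal), dict(nominal)
    lo[name] *= 0.5
    hi[name] *= 2.0
    span = np.log10(exposure(hi)) - np.log10(exposure(lo))
    print(f"{name:12s} spans {span:.2f} orders of magnitude")
```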
Abstract:
This work investigated external simulation models for a disc filter in an integrated simulation environment. The aim was to improve an existing mechanistic disc filter model. The model was built for a dynamic simulator developed for the needs of the paper industry (APMS); an external supplementary model, which makes use of measurement data from the disc filter manufacturer, was added to the original mechanistic model in the simulator. The availability of equipment data to filter users was improved by creating disc filter equipment data definitions on a server located on the Internet. The filter manufacturer can serve its customers by uploading equipment data to the server and linking the data to the simulation model. This is made possible by an integrated simulation environment used over the Internet, whose purpose is to combine simulation and process design in a comprehensive way. The designer is offered tools for dynamic simulation, balance simulation, and flowsheet drawing, with process equipment data readily available. These tools are to be implemented in a project called Galleria, which creates a process model and equipment data server on the Internet. Through the Galleria user interface, a process designer can use various simulation programs and the ready-made models created for them, and can access up-to-date equipment data. The external disc filter model computes the filtrate flows and filtrate consistencies for the cloudy, clear, and superclear filtrates. The input parameters of the model are the rotation speed of the discs, the consistency of the incoming feed, the drainability (freeness), and a tuning parameter that sets the ratio between the cloudy and clear filtrates. The freeness indicates which pulp is being processed: the higher the freeness, the better the pulp filters and the cleaner the filtrates usually are. The model parameters were tuned with regression analysis and with the help of feedback from the manufacturer. The user can choose whether to use the external or the original model. The original model must first be initialized by giving it nominal operating points for the flows and consistencies at a given rotation speed. The equations of the external model can be used to initialize the original model if the original model performs better than the external one. The external model can also be used without the simulation program, directly from the Galleria server. This gives the user the possibility to examine disc filter parameters and view filtration results from their own workstation anywhere, as long as an Internet connection is available. As a result of this work, the availability of disc filter equipment data to users improved, and the limitations and deficiencies of the original simulation model were reduced.
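To make the structure of such an external model concrete, the sketch below shows a hypothetical regression-style disc filter model with the inputs listed above (rotation speed, feed consistency, freeness, and a cloudy/clear split parameter). The functional form and the coefficients are invented for illustration and would in practice be fitted by regression against the manufacturer's measurements.

```python
import numpy as np

def external_disc_filter(rpm, feed_cs, freeness, split, c):
    """Hypothetical external disc filter model: total filtrate flow rises with
    rotation speed and feed consistency, filtrate consistency falls with freeness,
    and a split parameter sets the cloudy/clear ratio. The linear form and the
    coefficients are invented for illustration; this is not the thesis model."""
    total = c[0] + c[1] * rpm + c[2] * feed_cs            # filtrate flow, m3/min
    cloudy_flow = split * total
    clear_flow = (1.0 - split) * total
    cloudy_cs = max(c[3] - c[4] * freeness, 0.0)          # consistency, %
    clear_cs = 0.3 * cloudy_cs                            # clear filtrate is cleaner
    return {"cloudy": (cloudy_flow, cloudy_cs), "clear": (clear_flow, clear_cs)}

# In practice the coefficients would be fitted against the manufacturer's
# measurements, e.g. with numpy.linalg.lstsq; here they are set by hand.
c = np.array([2.0, 0.05, 0.8, 1.2, 0.001])
print(external_disc_filter(rpm=1.5, feed_cs=0.9, freeness=450, split=0.6, c=c))
```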
Abstract:
Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. While users of these models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion of the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management, and social-scientific causal modelling. Findings: The analysis indicates that dynamic, complex, and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to trace a correct cause-and-effect understanding of value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and act as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear about the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations, alongside the models' potential for communicating and directing attention. Managers should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view of the issue of causality in managerial accounting models.
Identification-commitment inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI), developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes, such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model that underlies the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model, which underlies the relation between commitment and identification, although each one is operationally distinct.
Abstract:
Geophysical data may provide crucial information about hydrological properties, states, and processes that are difficult to obtain by other means. Large data sets can be acquired over widely different scales in a minimally invasive manner and at comparatively low cost, but their effective use in hydrology requires an understanding of the fidelity of geophysical models, the assumptions made in their construction, and the links between geophysical and hydrological properties. Geophysics has been applied to groundwater prospecting for almost a century, but only in the last 20 years has it been used regularly together with classical hydrological data to build predictive hydrological models. A largely unexplored avenue for future work is to use geophysical data to falsify or rank competing conceptual hydrological models. A promising cornerstone for such a model selection strategy is the Bayes factor, but it can only be calculated reliably when the main sources of uncertainty throughout the hydrogeophysical parameter estimation process are considered. Most classical geophysical imaging tools tend to favor models with smoothly varying property fields, which are at odds with most conceptual hydrological models of interest. It is thus necessary to account for this bias or to use alternative approaches in which the proposed conceptual models are honored at all steps of the model-building process.
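The Bayes factor mentioned above is the ratio of the marginal likelihoods of two competing models, BF = p(d|M1) / p(d|M2). The toy sketch below estimates it by brute-force Monte Carlo averaging of the likelihood over prior draws for two stand-in "conceptual models"; this is only a low-dimensional illustration of the quantity, not a recipe for hydrogeophysical inversion.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_marginal_likelihood(data, prior_sampler, log_likelihood, n_prior=20_000):
    """Brute-force Monte Carlo estimate of log p(data | M) by averaging the
    likelihood over prior draws (only practical for low-dimensional toy models)."""
    theta = prior_sampler(n_prior)
    ll = np.array([log_likelihood(data, t) for t in theta])
    return np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()

# Toy example: two competing "conceptual models" for the mean of noisy observations,
# differing only in their prior on the mean (stand-ins for competing hydrological models).
data = rng.normal(1.0, 0.5, size=20)
log_lik = lambda d, m: np.sum(-0.5 * ((d - m) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi)))
m1 = lambda n: rng.normal(0.0, 1.0, n)     # model 1: prior mean near zero
m2 = lambda n: rng.normal(2.0, 1.0, n)     # model 2: prior mean near two

log_bf = (log_marginal_likelihood(data, m1, log_lik)
          - log_marginal_likelihood(data, m2, log_lik))
print(f"log Bayes factor (M1 vs M2): {log_bf:.2f}")
```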
Abstract:
The purpose of this thesis is to examine the strategic renewal capability of the Eastern Customs District, which serves as the case organization. What starting points does the organization have for meeting future challenges in its own operating environment, and what obstacles to renewal can be found? To succeed in tightening competition, identifying and exploiting the factors that affect renewal, such as competence, knowledge flow, leadership, and relationships, is of primary importance for public administration organizations as well. In this thesis, renewal capability is examined in the light of a three-dimensional organization model (mechanical, organic, and dynamic), and the organization's own strategic focus is taken as the starting point for development actions. The research and data collection methods are the KM-factor survey, a quantitative instrument administered electronically, and qualitative thematic interviews. The results provide strategically important information about the current state of the case organization, its weaknesses and strengths. Based on the results, the organization's way of operating is fairly uniform and in line with its strategic focus, that is, with the requirements of an organic operating environment. Development actions should nevertheless be targeted in particular at increasing the strategy-aligned competence of the personnel and the knowledge flow of supervisors, and at raising the overall level of work motivation throughout the target organization. The organization must create practices that support a climate of open knowledge flow and dialogue-like communication, so that the renewal capability of the organization as a unified system improves further.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use an approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, and inference is made on the basis of these exact responses. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses, following a machine learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without a perceptible increase in computation time and leads to a more accurate and more robust uncertainty propagation. The strategy explored in the first chapter thus consists in learning, from a subset of realizations, the relationship between the approximate (proxy) and exact flow responses.
In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response leads to an excellent prediction of the corresponding exact response, opening the door to many applications. The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice for ensuring that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy is coupled to an error model to provide this preliminary evaluation for the two-stage MCMC set-up, and we demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a problem of saline intrusion in a coastal aquifer.
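A minimal numerical sketch of the functional error-model idea, using ordinary PCA on discretized curves as a stand-in for FPCA: synthetic proxy and exact curves are generated, a linear map from proxy scores to exact scores is learned on a small training subset, and the remaining proxy responses are corrected. All data and modelling choices here are invented for illustration and are not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_scores(curves, n_comp):
    """Principal component scores of an (n_curves, n_times) array of discretized
    curves (a discrete stand-in for functional PCA)."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    basis = Vt[:n_comp]                           # principal component "functions"
    return (curves - mean) @ basis.T, basis, mean

# Synthetic "proxy" and "exact" breakthrough-like curves (purely illustrative data)
t = np.linspace(0, 1, 200)
amp = rng.uniform(0.5, 1.5, size=100)[:, None]
exact = amp * np.tanh(6 * (t - 0.5)) + 0.02 * rng.standard_normal((100, 200))
proxy = 0.9 * amp * np.tanh(5 * (t - 0.45))       # biased, cheaper approximation

# Error model: learn a linear map from proxy scores to exact scores on a training subset
train = np.arange(20)                             # realizations run with the exact model
Zp, Bp, mp = pca_scores(proxy, n_comp=3)
Ze_train, Be, me = pca_scores(exact[train], n_comp=3)
W, *_ = np.linalg.lstsq(np.c_[Zp[train], np.ones(len(train))], Ze_train, rcond=None)

# Predict "expected exact" curves for all realizations from their proxy runs
Ze_pred = np.c_[Zp, np.ones(len(Zp))] @ W
corrected = me + Ze_pred @ Be
print("mean abs error, proxy vs exact:    ", np.abs(proxy - exact).mean().round(3))
print("mean abs error, corrected vs exact:", np.abs(corrected - exact).mean().round(3))
```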
Abstract:
Streptavidin, a tetrameric protein secreted by Streptomyces avidinii, binds tightly to biotin, a small growth factor. One of the numerous applications of this high-affinity system is the streptavidin-coated surface of bioanalytical assays, which serves as a universal binder for straightforward immobilization of any biotinylated molecule. Proteins can be immobilized with a lower risk of denaturation using streptavidin-biotin technology than with direct passive adsorption. The purpose of this study was to characterize the properties and effects of streptavidin-coated binding surfaces on the performance of solid-phase immunoassays and to investigate the contributions of surface modifications. Various characterization tools and methods established in the study enabled convenient monitoring and binding-capacity determination of streptavidin-coated surfaces. Schematic modeling of the monolayer surface and quantification of adsorbed streptavidin disclosed the possibilities and the limits of passive adsorption. The determined yield of 250 ng/cm2 represented approximately 65% coverage compared with a modelled complete monolayer, which is consistent with theoretical surface models. Modifications such as polymerization and chemical activation of streptavidin resulted in a close to 10-fold increase in the biotin-binding density of the surface compared with the regular streptavidin coating. In addition, the stability of the surface against leaching was improved by chemical modification. The increased binding densities and capacities enabled wider high-end dynamic ranges in the solid-phase immunoassays, especially when fragments of the capture antibodies were used instead of intact antibodies for binding the antigen. The binding capacity of the streptavidin surface was not, as such, predictive of the low-end performance of the immunoassays or of the assay sensitivity; other features such as non-specific binding, variation, and leaching turned out to be more relevant. Immunoassays that use a direct surface readout of time-resolved fluorescence from a washed surface depend on the density of the labeled antibodies within a defined area on the surface. The binding surface was condensed into a spot by coating streptavidin in liquid droplets in special microtiter wells with a small circular indentation at the bottom. The condensed binding area enabled denser packing of the labeled antibodies on the surface, which resulted in a 5- to 6-fold increase in signal-to-background ratios and an equivalent improvement in the detection limits of the solid-phase immunoassays. This work showed that the properties of streptavidin-coated surfaces can be modified and that the defined properties of streptavidin-based immunocapture surfaces contribute to the performance of heterogeneous immunoassays.
Abstract:
The purpose of this study was to increase understanding of the role and nature of trust in asymmetric technology partnership formation. In the knowledge-based "learning race", knowledge is considered a primary source of competitive advantage. In the emerging ICT sector, the high pace of technological change, the convergence of technologies and industries, and the increasing complexity and uncertainty have forced even the largest players to seek cooperation for complementary knowledge and capabilities. Small technology firms need the complementary resources and legitimacy of large firms to grow and compete in the global marketplace. Most of the earlier research indicates, however, that partnerships between firms of asymmetric size, managerial resources, and cultures have failed. A basic assumption supported by earlier research was that trust is a critical factor in asymmetric technology partnership formation. Asymmetric technology partnership formation is a dynamic and multi-dimensional process, and consequently a holistic research approach was selected. The research issue was approached from different levels: the individual decision-maker, the firm, and the relationship between the parties. The impact of the dynamic environment and the technology content was also analyzed. A multitheoretical approach and a qualitative research method, with in-depth interviews in five large ICT companies and eight small ICT companies, enabled a holistic and rich view of the research issue. The study contributes to the scarce understanding of the nature and evolution of trust in asymmetric technology partnership formation and sheds light on the specific nature of asymmetric technology partnerships. The partnerships were found to be tentative, and the diverse strategic intents of small and large technology firms appeared as a major challenge. The role of the boundary spanner was highlighted as a possibility for matching the incompatible organizational cultures. A shared vision was found to be a precondition for individual-based fast trust, leading to intuitive decision-making and experimentation. The relationships were tentative and were continuously re-evaluated through the key actors' sense-making of the technology content, the asymmetry, and the dynamic environment. A multi-dimensional conceptualization of trust was created, and propositions on the role and nature of trust are given for further research.
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within bone tissue play a major role in bone (re)modeling, and previous studies have shown that dynamic loading appears to be more important for bone (re)modeling than static loading. The finite element method has previously been used to assess bone strains, but it may be limited to static analysis because of the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. In vivo implementation of strain gauges on bone surfaces has also been used to quantify the mechanical loading environment of the skeleton; however, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analyzing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three three-dimensional musculoskeletal models, in which the right tibia is assumed to be flexible, are used as demonstration examples. The models are employed in a forward dynamics simulation to predict the tibial strains during a level walking exercise. The flexible tibia model is developed using the actual geometry of the subject's tibia, obtained from three-dimensional reconstruction of magnetic resonance images. An inverse dynamics simulation based on motion capture data from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle. In the forward dynamics simulation, a proportional-derivative servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and to check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with in vivo strain measurements reported in the literature. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and may thus be of use in detailed strain estimation of bones in different applications. The information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing, and reduce osteoporotic bone loss.
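The proportional-derivative servo controller described above can be sketched as follows; the gains, the force cap, and the sign convention (a positive error, muscle longer than desired, increases the pulling force) are illustrative assumptions, not the values used in the study.

```python
def pd_muscle_force(desired_len, desired_vel, actual_len, actual_vel,
                    kp=2000.0, kd=50.0, f_max=3000.0):
    """Proportional-derivative servo: force driving the muscle toward the desired
    contraction trajectory obtained from the inverse dynamics step. A positive
    error (muscle longer than desired) increases the pulling force. The gains and
    the force cap are illustrative, not the values used in the study."""
    force = kp * (actual_len - desired_len) + kd * (actual_vel - desired_vel)
    return max(0.0, min(force, f_max))   # muscles only pull; saturate at f_max

# One control step: the muscle is 4 mm longer than desired and still lengthening
print(pd_muscle_force(desired_len=0.250, desired_vel=-0.02,
                      actual_len=0.254, actual_vel=0.01))
```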
Abstract:
PURPOSE: This study aims to identify which aspects of the pupil light reflex are most influenced by rods and cones independently, by analyzing pupil recordings from different mouse models of photoreceptor deficiency. METHODS: One-month-old wild type (WT), rodless (Rho-/-), coneless (Cnga3-/-), or photoreceptorless (Cnga3-/-; Rho-/- or Gnat1-/-) mice were subjected to brief red and blue light stimuli of increasing intensity. To describe the initial dynamic response to light, the maximal pupillary constriction amplitudes and the derivative curve of the first 3 seconds were determined. To estimate the postillumination phase, the constriction amplitude at 9.5 seconds after light termination was related to the maximal constriction amplitude. RESULTS: Rho-/- mice showed decreased constriction amplitude but more prolonged pupilloconstriction to all blue and red light stimuli compared with wild type mice. Cnga3-/- mice had constriction amplitudes similar to WT; however, following maximal constriction, the early and rapid dilation in response to low-intensity blue light was decreased. In response to high-intensity blue light, Cnga3-/- mice demonstrated marked prolongation of the pupillary constriction. Cnga3-/-; Rho-/- mice had no pupil response to red light of low and medium intensity. CONCLUSIONS: From gene-defective mouse models in which rod or cone function was selectively abolished, we determined that mouse rod photoreceptors contribute strongly to the pupil response to blue light stimuli, and also to low- and medium-intensity red stimuli. We also observed that cone cells mainly drive the partial rapid dilation of the initial response to low-intensity blue light stimuli. Thus, photoreceptor dysfunction can be inferred from chromatic pupillometry in mouse models.
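The metrics described in the Methods (maximal constriction amplitude, the derivative of the first 3 seconds of the response, and the constriction amplitude 9.5 seconds after light termination relative to the maximum) can be computed from a pupil diameter trace as in the sketch below; the synthetic trace and the parameter choices are illustrative only.

```python
import numpy as np

def pupil_metrics(t, diameter, light_off, baseline):
    """Summary metrics of a pupil trace (diameter in mm, t in seconds):
    maximal constriction amplitude, peak constriction velocity over the first
    3 s of the response, and the relative amplitude 9.5 s after light offset."""
    constriction = baseline - diameter                       # positive = constricted
    max_amp = constriction.max()
    first3 = t <= t[0] + 3.0
    peak_velocity = np.max(-np.gradient(diameter[first3], t[first3]))
    idx_post = np.argmin(np.abs(t - (light_off + 9.5)))
    pipr_ratio = constriction[idx_post] / max_amp            # post-illumination response
    return max_amp, peak_velocity, pipr_ratio

# Synthetic trace: fast constriction during a 1 s stimulus, slow redilation afterwards
t = np.linspace(0, 15, 1500)
d = 2.0 - 0.8 * (1 - np.exp(-3 * t)) * np.exp(-0.1 * np.maximum(t - 1.0, 0))
print(pupil_metrics(t, d, light_off=1.0, baseline=2.0))
```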
Abstract:
Many educators and educational institutions have yet to integrate web-based practices into their classrooms and curricula. As a result, it can be difficult to prototype and evaluate approaches to transforming classrooms from static endpoints to dynamic, content-creating nodes in the online information ecosystem. But many scholastic journalism programs have already embraced the capabilities of the Internet for virtual collaboration, dissemination, and reader participation. Because of this, scholastic journalism can act as a test-bed for integrating web-based sharing and collaboration practices into classrooms. Student Journalism 2.0 was a research project to integrate open copyright licenses into two scholastic journalism programs, to document outcomes, and to identify recommendations and remaining challenges for similar integrations. Video and audio recordings of two participating high school journalism programs informed the research. In describing the steps of our integration process, we note some important legal, technical, and social challenges. Legal worries such as uncertainty over copyright ownership could lead districts and administrators to disallow open licensing of student work. Publication platforms among journalism classrooms are far from standardized, making any integration of new technologies and practices difficult to achieve at scale. And teachers and students face challenges re-conceptualizing the role their class work can play online.