995 results for Diffusion Models
Abstract:
The diffusion of mobile telephony began in 1971 in Finland, when the first car phones, called ARP, were taken into use. Technologies changed from ARP to NMT and later to GSM. The main application of the technology, however, was voice transfer. The birth of the Internet created an open public data network and easy access to other types of computer-based services over networks. Telephones had been used as modems, but the development of cellular technologies enabled automatic access from mobile phones to the Internet. Other wireless technologies, for instance wireless LANs, were also introduced. Telephony had developed from analog to digital in fixed networks, which allowed easy integration of fixed and mobile networks. This development opened completely new functionality to computers and mobile phones. It also initiated the merger of the information technology (IT) and telecommunication (TC) industries. Despite the opportunity this created for new competition between firms, applications based on the new functionality were rare. Furthermore, technology development combined with innovation can be disruptive to industries. This research focuses on the new technology's impact on competition in the ICT industry through understanding the strategic needs and alternative futures of the industry's customers. The speed of change in the ICT industry is high, and it was therefore valuable to integrate the dynamic capability view of the firm into this research. Dynamic capabilities are an application of the resource-based view (RBV) of the firm. As stated in the literature, strategic positioning complements the RBV. This theoretical framework leads the research to focus on three areas: customer strategic innovation and business model development, external future analysis, and process development combining these two. The theoretical contribution of the research is the development of a methodology integrating the RBV, dynamic capabilities and strategic positioning. The research approach was constructive, as the study was initiated by actual managerial problems. The requirement for iterative and innovative progress in the research supported the chosen approach. The study applies known methods in product development, for instance the innovation process in the Group Decision Support Systems (GDSS) laboratory and Quality Function Deployment (QFD), and combines them with known strategy analysis tools such as industry analysis and the scenario method. As the main result, the thesis presents the strategic innovation process, in which new business concepts are used to describe alternative resource configurations and scenarios as alternative competitive environments; this can be a new way for firms to achieve competitive advantage in high-velocity markets. In addition to the strategic innovation process, the study also produced approximately 250 new innovations for the participating firms, reduced technology uncertainty, supported strategic infrastructure decisions in the firms, and produced a knowledge bank including data from 43 ICT and 19 paper industry firms between the years 1999 and 2004. The methods presented in this research are also applicable to other industries.
Abstract:
PURPOSE: To determine whether a mono-, bi- or tri-exponential model best fits the intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) signal of normal livers. MATERIALS AND METHODS: The pilot and validation studies were conducted in 38 and 36 patients with normal livers, respectively. The DWI sequence was performed using single-shot echoplanar imaging with 11 (pilot study) and 16 (validation study) b values. In each study, data from all patients were used to model the IVIM signal of normal liver. Diffusion coefficients (Di ± standard deviations) and their fractions (fi ± standard deviations) were determined from each model. The models were compared using the extra sum-of-squares test and information criteria. RESULTS: The tri-exponential model provided a better fit than both the bi- and mono-exponential models. The tri-exponential IVIM model determined three diffusion compartments: a slow (D1 = 1.35 ± 0.03 × 10⁻³ mm²/s; f1 = 72.7 ± 0.9 %), a fast (D2 = 26.50 ± 2.49 × 10⁻³ mm²/s; f2 = 13.7 ± 0.6 %) and a very fast (D3 = 404.00 ± 43.7 × 10⁻³ mm²/s; f3 = 13.5 ± 0.8 %) diffusion compartment [results from the validation study]. The very fast compartment contributed to the IVIM signal only for b values ≤ 15 s/mm². CONCLUSION: The tri-exponential model provided the best fit for IVIM signal decay in the liver over the 0-800 s/mm² range. In IVIM analysis of normal liver, a third very fast (pseudo)diffusion component might be relevant. KEY POINTS: • For normal liver, a tri-exponential IVIM model might be superior to a bi-exponential one. • A very fast compartment (D = 404.00 ± 43.7 × 10⁻³ mm²/s; f = 13.5 ± 0.8 %) is determined from the tri-exponential model. • This compartment contributes to the IVIM signal only for b ≤ 15 s/mm².
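As a hedged illustration of the model comparison described in this abstract, the sketch below fits the tri-exponential IVIM decay S(b)/S0 = f1·exp(−b·D1) + f2·exp(−b·D2) + f3·exp(−b·D3) to synthetic data with scipy; the b values, noise level and starting guesses are assumptions for illustration, not the study's acquisition protocol.

```python
# Minimal sketch: tri-exponential IVIM fit on synthetic normalized signal.
import numpy as np
from scipy.optimize import curve_fit

def triexp(b, f2, f3, D1, D2, D3):
    f1 = 1.0 - f2 - f3                   # fractions constrained to sum to 1
    return f1 * np.exp(-b * D1) + f2 * np.exp(-b * D2) + f3 * np.exp(-b * D3)

b = np.array([0, 5, 10, 15, 25, 50, 100, 200, 400, 600, 800], float)  # s/mm^2
true = (0.137, 0.135, 1.35e-3, 26.5e-3, 404e-3)   # validation-study estimates
signal = triexp(b, *true) + np.random.normal(0, 0.005, b.size)  # synthetic S/S0

p0 = (0.1, 0.1, 1e-3, 20e-3, 300e-3)    # initial guess: slow < fast < very fast
popt, _ = curve_fit(triexp, b, signal, p0=p0,
                    bounds=([0, 0, 0, 0, 0], [1, 1, 0.01, 0.1, 1.0]))
print(dict(zip(("f2", "f3", "D1", "D2", "D3"), popt.round(5))))
```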
Abstract:
The main objective of the thesis was to examine the effect of price and of the competitive situation on the diffusion of mobile communications. The empirical part of the work examined the effect of mobile phone subscription prices on the diffusion of subscriptions, as well as how competition in the sector has affected the price level of mobile communications. The work also analysed the competitive situation of mobile communications in the Finnish market. The empirical data of the study were collected from secondary sources, for example the EMC database. The study was quantitative in nature. The models used in the empirical part were formulated on the basis of earlier studies. Regression analysis was used to estimate the effect of price on the speed of diffusion and on the number of potential adopters; a nonlinear model was applied in the regression analysis. The results showed that the steadily decreasing prices of mobile phone subscriptions and handsets have no significant effect on the diffusion of mobile communications. Nor has the competitive situation had much effect on the general price level of mobile communications. On the basis of the results, some recommendations for further research could also be given.
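The abstract does not state the exact nonlinear specification, so as a hedged sketch the snippet below fits a logistic diffusion curve, a common choice for subscription diffusion, to synthetic data; the parameter names (m for potential adopters, p for diffusion speed) and all numbers are illustrative assumptions.

```python
# Minimal sketch: nonlinear regression of a logistic diffusion curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, p, t0):
    """Cumulative adopters: ceiling m, diffusion speed p, inflection year t0."""
    return m / (1.0 + np.exp(-p * (t - t0)))

t = np.arange(1990, 2005, dtype=float)              # years (synthetic)
subs = logistic(t, 4.5e6, 0.6, 1997) + np.random.normal(0, 5e4, t.size)

popt, _ = curve_fit(logistic, t, subs, p0=(4e6, 0.5, 1996))
m_hat, p_hat, t0_hat = popt
print(f"potential adopters ~ {m_hat:.3g}, diffusion speed ~ {p_hat:.2f}/yr")
```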
Abstract:
Focal epilepsy is increasingly recognized as the result of an altered brain network, on both the structural and functional levels, and the characterization of these widespread brain alterations is crucial for our understanding of the clinical manifestations of seizures and cognitive deficits, as well as for the management of candidates for epilepsy surgery. Tractography based on diffusion tensor imaging allows non-invasive mapping of white matter tracts in vivo. Recently, diffusion spectrum imaging (DSI), based on an increased number of diffusion directions and intensities, has improved the sensitivity of tractography, notably with respect to the problem of fiber crossing, and recent developments allow acquisition times compatible with clinical application. We used DSI and parcellation of the gray matter in regions of interest to build whole-brain connectivity matrices describing the mutual connections between cortical and subcortical regions in patients with focal epilepsy and healthy controls. In addition, the high angular and radial resolution of DSI also allowed us to evaluate some of the biophysical compartment models, to better understand the cause of the changes in diffusion anisotropy. Global connectivity, hub architecture and regional connectivity patterns were altered in patients with temporal lobe epilepsy (TLE) and showed different characteristics in right (RTLE) versus left (LTLE) TLE, with stronger abnormalities in RTLE. The microstructural analysis suggested that disturbed axonal density contributed more than fiber orientation to the connectivity changes affecting the temporal lobes, whereas fiber orientation changes were more involved in extratemporal changes. Our study provides further structural evidence that RTLE and LTLE are not symmetrical entities, and DSI-based imaging could help investigate the microstructural correlates of these imaging abnormalities.
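A minimal sketch of the whole-brain connectivity-matrix construction described above, assuming streamlines have already been reduced to pairs of gray-matter ROI labels; the helper name and toy data are hypothetical.

```python
# Minimal sketch: connectivity matrix from streamline endpoint labels.
import numpy as np

def connectivity_matrix(endpoint_pairs, n_rois):
    """Count streamlines linking each pair of regions (undirected)."""
    C = np.zeros((n_rois, n_rois))
    for i, j in endpoint_pairs:
        C[i, j] += 1
        C[j, i] += 1
    return C

# toy example: 4 ROIs, 5 streamlines
pairs = [(0, 1), (0, 1), (1, 2), (2, 3), (0, 3)]
C = connectivity_matrix(pairs, n_rois=4)
degree = C.sum(axis=1)          # simple hub measure: node strength
print(C, degree, sep="\n")
```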
Abstract:
It is well known that the Neolithic transition spread across Europe at a speed of about 1 km/yr. This result has been previously interpreted as a range expansion of the Neolithic driven mainly by demic diffusion (whereas cultural diffusion played a secondary role). However, a long-standing problem is whether this value (1 km/yr) and its interpretation (mainly demic diffusion) are characteristic only of Europe or universal (i.e. intrinsic features of Neolithic transitions all over the world). So far, Neolithic spread rates outside Europe have been barely measured, and Neolithic spread rates substantially faster than 1 km/yr have not been previously reported. Here we show that the transition from hunting and gathering to herding in southern Africa spread at a rate of about 2.4 km/yr, i.e. about twice as fast as the European Neolithic transition. Thus the value 1 km/yr is not a universal feature of Neolithic transitions in the world. Resorting to a recent demic-cultural wave-of-advance model, we also find that the main mechanism at work in the southern African Neolithic spread was cultural diffusion (whereas demic diffusion played a secondary role). This is in sharp contrast to the European Neolithic. Our results further suggest that Neolithic spread rates could be mainly driven by cultural diffusion in cases where the final state of this transition is herding/pastoralism (as in southern Africa) rather than farming and stockbreeding (as in Europe).
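For context, the block below states the classic Fisher reaction-diffusion baseline behind such spread-rate estimates; the demic-cultural model cited in the abstract generalizes this with delayed reproduction and cultural-transmission terms, so the formula is background, not the authors' exact model.

```latex
% Classic Fisher wave-of-advance baseline (context only; the cited
% demic-cultural model adds delay and cultural-transmission terms).
\[
  \frac{\partial p}{\partial t} = D\,\nabla^{2} p
    + a\,p\Bigl(1 - \frac{p}{p_{\max}}\Bigr),
  \qquad v = 2\sqrt{aD},
\]
% where $p$ is population density, $D$ the demic mobility coefficient,
% $a$ the net reproduction rate, and $v$ the resulting front speed.
```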
Abstract:
The partial replacement of NaCl by KCl is a promising alternative for producing cheese with lower sodium content, since KCl does not change the final quality of the cheese. In order to assure proper salt proportions, mathematical models are employed to control the production process and simulate the multicomponent diffusion during the reduced-salt cheese ripening period. The generalized Fick's second law is widely accepted as the primary mass transfer model within solid foods. The finite element method (FEM) was used to solve the resulting system of differential equations. A NaCl and KCl multicomponent diffusion was thus simulated using a 20% (w/w) static brine with 70% NaCl and 30% KCl during Prato cheese (a Brazilian semi-hard cheese) salting and ripening. The theoretical results were compared with experimental data and indicated deviations of 4.43% for NaCl and 4.72% for KCl, validating the proposed model for the production of good-quality, reduced-sodium cheeses.
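As a hedged illustration of the governing equation, the sketch below integrates Fick's second law for NaCl and KCl in a 1D slab using a simple explicit finite-difference scheme rather than the study's FEM; the diffusivities, geometry and brine concentrations are placeholder assumptions, not the paper's estimates.

```python
# Minimal sketch: Fick's second law, dC/dt = D * d2C/dx2, for two salts
# diffusing into a 1D cheese slab from a fixed brine boundary.
import numpy as np

L, nx, nt = 0.02, 51, 20000             # slab depth (m), grid points, time steps
dx, dt = L / (nx - 1), 1.0              # spatial step (m), time step (s)
D = {"NaCl": 2.0e-10, "KCl": 2.2e-10}   # diffusivities, m^2/s (placeholders)
brine = {"NaCl": 0.14, "KCl": 0.06}     # boundary concentrations (placeholders)

C = {s: np.zeros(nx) for s in D}
for s in C:
    C[s][0] = brine[s]                  # fixed (Dirichlet) value at brine face
    assert D[s] * dt / dx**2 < 0.5      # explicit-scheme stability criterion

for _ in range(nt):
    for s in C:
        lap = C[s][2:] - 2 * C[s][1:-1] + C[s][:-2]   # discrete d2C/dx2 * dx^2
        C[s][1:-1] += D[s] * dt / dx**2 * lap         # Fick's second law update
        C[s][-1] = C[s][-2]             # zero-flux condition at the far face

print({s: round(C[s][nx // 2], 5) for s in C})        # mid-slab concentrations
```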
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo, former Zaïre, in 1995 as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances, and the model predictions then show larger uncertainties, in accordance with the assumption of an incomplete model.
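A minimal sketch of the deterministic part of this workflow: an SEIR ODE model with a Gaussian sum-of-squares likelihood sampled by random-walk Metropolis. The data are synthetic, the prior, noise level and step sizes are illustrative assumptions, and the thesis's EKF-filtered SDE likelihood is not reproduced here.

```python
# Minimal sketch: SEIR ODE + random-walk Metropolis on synthetic onset data.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    N = y.sum()
    return [-beta*S*I/N, beta*S*I/N - sigma*E, sigma*E - gamma*I, gamma*I]

def daily_onsets(theta, t, y0):
    beta, sigma, gamma = theta
    sol = odeint(seir, y0, t, args=(beta, sigma, gamma))
    return sigma * sol[:, 1]                        # onset rate = sigma * E(t)

def log_post(theta, t, y0, data, noise_sd=2.0):
    if np.any(np.asarray(theta) <= 0):              # flat positive prior
        return -np.inf
    resid = data - daily_onsets(theta, t, y0)
    return -0.5 * np.sum((resid / noise_sd) ** 2)   # Gaussian likelihood

t = np.arange(0, 120.0)
y0 = [5e6 - 1, 0, 1, 0]
data = daily_onsets((0.35, 1/9.4, 1/7.4), t, y0)    # synthetic "observations"
data += np.random.normal(0, 2.0, data.shape)

theta = np.array([0.3, 0.1, 0.15])                  # initial guess
lp = log_post(theta, t, y0, data)
chain = []
for _ in range(5000):                               # random-walk Metropolis
    prop = theta + np.random.normal(0, 0.005, 3)
    lp_prop = log_post(prop, t, y0, data)
    if np.log(np.random.rand()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
beta, sigma, gamma = np.mean(chain[1000:], axis=0)
print("R0 estimate:", beta / gamma)                 # basic reproduction number
```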
Abstract:
The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models and continuous-time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
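For context, a hedged sketch of the standard GARCH(1,1) recursion whose restrictive structure the paper relaxes; parameter values are illustrative assumptions.

```python
# Minimal sketch: the standard GARCH(1,1) recursion that SR-SARV weakens.
import numpy as np

def simulate_garch(n, omega=0.05, alpha=0.08, beta=0.90, seed=0):
    """GARCH(1,1): sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha - beta))  # unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps, sigma2

eps, sigma2 = simulate_garch(10_000)
# Here conditional variance is an exact function of past squared innovations;
# SR-SARV relaxes this perfect correlation through a latent volatility state.
print("sample kurtosis:", ((eps - eps.mean())**4).mean() / eps.var()**2)
```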
Abstract:
A file entitled Charbonneau_Nathalie_2008_AnimationAnnexeT accompanies the thesis. It contains an animated sequence demonstrating the type of walkthrough that can be carried out within the digital environments developed. It is a .wmv file that has been compressed.
Abstract:
Decision making is a fundamental computational process in many aspects of animal behavior. The model most often encountered in studies of decision making is the diffusion model. It has long accounted for a wide variety of behavioral and neurophysiological data in this field. However, another model, the urgency model, explains the same data equally well, and does so more parsimoniously and with firmer grounding in theory. In this work, we first review the origins and development of the diffusion model and show how it became established as the framework for interpreting most experimental data on decision making. In doing so, we note its strengths, in order then to compare it objectively and rigorously with alternative models. We re-examine a number of implicit and explicit assumptions made by this model and highlight some of its shortcomings. This analysis frames our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology dissociates the two models, and whose results illustrate the empirical and theoretical limits of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, emphasizing new research perspectives.
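A minimal sketch of the bounded-accumulation (drift-diffusion) process discussed above, with illustrative parameters not taken from the experiments reviewed; the urgency-gating alternative is noted in a closing comment rather than implemented.

```python
# Minimal sketch: drift-diffusion model of two-choice decisions.
import numpy as np

def ddm_trial(drift=0.2, noise=1.0, bound=1.0, dt=0.001, rng=None):
    """Accumulate noisy evidence until one of two symmetric bounds is hit."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0                       # (reaction time, upper-bound choice)

rng = np.random.default_rng(1)
trials = [ddm_trial(rng=rng) for _ in range(2000)]
rts = np.array([rt for rt, _ in trials])
acc = np.mean([choice for _, choice in trials])
print(f"mean RT {rts.mean():.3f} s, accuracy {acc:.2%}")
# An urgency-gating variant would multiply momentary evidence by a growing
# urgency signal and compare against a fixed (or collapsing) criterion.
```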
Abstract:
This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. The study presented in this first part includes the computation of the distribution of each process, the means and variances, as well as the moment generating functions, among other properties. Using these properties, the second part studies option pricing models based on telegraph processes with jumps. This part describes how to compute risk-neutral measures, establishes the no-arbitrage condition for this type of model, and finally computes the prices of European call and put options.
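A hedged sketch of the basic telegraph process underlying these pricing models: a particle whose velocity flips between +c and −c at the epochs of a rate-λ Poisson process. Parameters are illustrative; no pricing is performed.

```python
# Minimal sketch: simulate one path of a telegraph process on [0, T].
import numpy as np

def telegraph_path(T=1.0, lam=5.0, c=1.0, seed=0):
    """Position of a particle whose velocity flips between +c and -c
    at the jump times of a rate-lam Poisson process."""
    rng = np.random.default_rng(seed)
    t, x, v = 0.0, 0.0, c
    times, values = [0.0], [0.0]
    while True:
        tau = rng.exponential(1.0 / lam)      # waiting time to the next flip
        if t + tau >= T:
            times.append(T)
            values.append(x + v * (T - t))    # drift to the horizon, no flip
            return np.array(times), np.array(values)
        t += tau
        x += v * tau
        v = -v                                # velocity reversal
        times.append(t)
        values.append(x)

times, path = telegraph_path()
print(f"X(T) = {path[-1]:.4f} after {len(times) - 2} velocity switches")
```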
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier–Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPE_r, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPE_r mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPE_r conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γ_mixing and flux Richardson number R_f, for which new physically based definitions are proposed and contrasted with previous definitions. The new framework allows for a more rigorous and general re-derivation from first principles of the Munk & Wunsch (1998, hereafter MW98) constraint, also valid for a non-Boussinesq ocean:
\[
  G(KE) \approx \frac{1 - \xi R_f}{\xi R_f}\, W_{r,\mathrm{forcing}}
  = \frac{1 + (1 - \xi)\,\gamma_{\mathrm{mixing}}}{\xi\,\gamma_{\mathrm{mixing}}}\, W_{r,\mathrm{forcing}},
\]
where G(KE) is the work rate done by the mechanical forcing, W_{r,forcing} is the rate of loss of GPE_r due to high-latitude cooling, and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as W_{r,forcing} and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γ_mixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γ_mixing = 0.2 in the above formula.
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl–Rudich–Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter, oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (< 10⁻¹⁰ cm² s⁻¹). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB. We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
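As a hedged illustration of the multi-layer idea (not the published KM-SUB parameterization), the sketch below integrates a stack of well-mixed bulk layers coupled by Fickian exchange, with a pseudo-first-order loss confined to the surface layer standing in for the ozonolysis reaction; all rates and dimensions are assumptions.

```python
# Minimal sketch: n well-mixed layers, diffusive exchange, surface-only loss.
import numpy as np
from scipy.integrate import odeint

n, dz = 10, 1e-7          # number of bulk layers, layer thickness (m)
Db = 1e-14                # bulk diffusion coefficient, m^2/s (= 1e-10 cm^2/s,
                          # the (semi-)solid threshold quoted in the abstract)
k_surf = 1e-3             # pseudo-first-order loss in the surface layer (1/s)

def dydt(y, t):
    flux = Db * np.diff(y) / dz       # Fickian exchange between neighbors
    dy = np.zeros(n)
    dy[:-1] += flux / dz              # exchange with the deeper neighbor...
    dy[1:] -= flux / dz               # ...conserving mass between layers
    dy[0] -= k_surf * y[0]            # reaction confined to the surface layer
    return dy

y0 = np.full(n, 1.0)                  # normalized oleic-acid-like reactant
t = np.linspace(0, 10 * 3600, 200)    # ten hours
sol = odeint(dydt, y0, t)
print("surface vs deepest layer:", sol[-1, 0].round(3), sol[-1, -1].round(3))
```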
Abstract:
Accurate replication of the processes associated with the energetics of the tropical ocean is necessary if coupled GCMs are to simulate the physics of ENSO correctly, including the transfer of energy from the winds to the ocean thermocline and energy dissipation during the ENSO cycle. Here, we analyze ocean energetics in coupled GCMs in terms of two integral parameters describing net energy loss in the system, using the approach recently proposed by Brown and Fedorov (J Clim 23:1563–1580, 2010a) and Fedorov (J Clim 20:1108–1117, 2007). These parameters are (1) the efficiency c of the conversion of wind power into the buoyancy power that controls the rate of change of the available potential energy (APE) in the ocean, and (2) the e-folding rate a that characterizes the damping of APE by turbulent diffusion and other processes. Estimating these two parameters for coupled models reveals potential deficiencies (and large differences) in how state-of-the-art coupled GCMs reproduce ocean energetics, as compared to ocean-only models and data-assimilating models. The majority of the coupled models we analyzed show a lower efficiency (values of c in the range of 10–50% versus 50–60% for ocean-only simulations or reanalysis) and relatively strong energy damping (values of a⁻¹ in the range 0.4–1 years versus 0.9–1.2 years). These differences in the model energetics appear to reflect differences in the simulated thermal structure of the tropical ocean and the structure of ocean equatorial currents, as well as deficiencies in the way coupled models simulate ENSO.
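A hedged sketch of how the two integral parameters could be estimated from model output: the efficiency c as a regression slope of buoyancy power on wind power, and the damping rate a from the e-folding of the APE anomaly autocorrelation. The time series, and these estimation choices themselves, are illustrative assumptions rather than the authors' procedure.

```python
# Minimal sketch: estimating efficiency c and damping rate a from synthetic
# stand-ins for coupled-GCM energetics diagnostics.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 12.0                                    # monthly steps, in years
nt = 600                                           # 50 years

wind = 1.0 + 0.05 * rng.standard_normal(nt)        # wind power input
buoy = 0.35 * wind + 0.02 * rng.standard_normal(nt)  # buoyancy power

ape = np.zeros(nt)                                 # APE anomaly, AR(1)-like,
for i in range(1, nt):                             # built with 0.7 yr e-folding
    ape[i] = np.exp(-dt / 0.7) * ape[i - 1] + 0.1 * rng.standard_normal()

# (1) efficiency c: regression slope of buoyancy power on wind power
c = np.polyfit(wind, buoy, 1)[0]

# (2) damping rate a: exponential decay of the APE autocorrelation
lags = np.arange(1, 13)                            # 1..12 months
acf = np.array([np.corrcoef(ape[:-k], ape[k:])[0, 1] for k in lags])
a = -np.polyfit(lags * dt, np.log(np.clip(acf, 1e-6, None)), 1)[0]

print(f"efficiency c ~ {c:.2f}, e-folding time 1/a ~ {1 / a:.2f} years")
```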