889 results for The Lattice Solid Model


Relevance:

100.00%

Publisher:

Abstract:

This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-Peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovations, and the concept of the business model. The literature review builds up a propositional framework for estimating the amount of radical change in a company's business model with two intermediate variables: the disruptiveness potential of a new technology, and its strategic importance to the firm. The data was gathered in group discussion sessions in each company. The results of each case analysis were brought together to evaluate how firms interpret potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, firms did not see the new technologies as a threat in terms of potential competence disruption.

Relevance:

100.00%

Publisher:

Abstract:

The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to analyse the sensitivity of the results of a distributed hydrologic model to the time resolution of the rainfall input in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective is to propose a methodology that improves radar rainfall estimation at higher spatial and temporal resolution. Composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the observed underestimation of rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological behaviour of the basin.
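As an illustration of the two processing steps named above, the sketch below converts reflectivity to rain rate through a power-law Z/R relationship and estimates the advection vector between two consecutive radar images from the peak of their cross-correlation. The function names and default coefficients are illustrative assumptions, not the values fitted in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate R (mm/h) via Z = a * R**b.
    The defaults are Marshall-Palmer-style placeholders; the study fits
    separate Z/R relationships for convective and stratiform echoes."""
    z = 10.0 ** (dbz / 10.0)              # linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)

def advection_vector(field_t0, field_t1):
    """Estimate the displacement (in pixels) between two consecutive
    radar images from the peak of their spatial cross-correlation."""
    f0 = field_t0 - field_t0.mean()
    f1 = field_t1 - field_t1.mean()
    corr = fftconvolve(f1, f0[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre        # (dy, dx) shift between scans
```

Intermediate fields at finer time steps (e.g. 1 min between 6-min scans) can then be generated by shifting the earlier image along the corresponding fraction of the estimated vector.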

Relevance:

100.00%

Publisher:

Abstract:

Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in models of this type into a more transparent, effective, and efficient exercise.
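A minimal sketch of the Bayesian machinery referred to above, assuming a user-supplied log-posterior (log prior plus the log-likelihood of the individual-based model): a random-walk Metropolis sampler, one of the simplest Markov chain Monte Carlo methods. The names, the Gaussian proposal and the tuning values are illustrative, not the study's implementation.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_steps=10_000, step=0.1, seed=0):
    """Random-walk Metropolis sampling from the posterior of the model
    parameters; log_posterior(theta) returns log prior + log likelihood
    of the individual-based tree-line model (supplied by the caller)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:   # accept/reject step
            theta, logp = proposal, logp_prop
        chain[i] = theta                              # store current state
    return chain
```

Posterior samples drawn this way can feed standard model selection criteria when several alternative process subsets are compared.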

Relevance:

100.00%

Publisher:

Abstract:

Diastrophic dysplasia (DTD) is a recessive chondrodysplasia caused by mutations in SLC26A2, a cell membrane sulfate-chloride antiporter. Impaired sulfate uptake results in low cytosolic sulfate, leading to undersulfation of cartilage proteoglycans (PGs). In this work, we used the dtd mouse model to study the role of N-acetyl-L-cysteine (NAC), a well-known drug with antioxidant properties, as an intracellular sulfate source for macromolecular sulfation. Because a crucial phase of skeletal development and growth occurs pre-natally, we administered 30 g/l NAC in the drinking water to pregnant mice to explore a possible transplacental effect on the fetuses. When cartilage PG sulfation was evaluated by high-performance liquid chromatography disaccharide analysis in dtd newborn mice, a marked increase in PG sulfation was observed in newborns from NAC-treated pregnancies compared with the placebo group. Morphometric studies of the femur, tibia and ilium after skeletal staining with alcian blue and alizarin red indicated a partial rescue of the abnormal bone morphology in dtd newborns from treated females, compared with pups from untreated females. The beneficial effect of increased macromolecular sulfation was confirmed by chondrocyte proliferation studies in cryosections of the tibial epiphysis using proliferating cell nuclear antigen immunohistochemistry: the percentage of proliferating cells, significantly reduced in the placebo group, reached normal values in dtd newborns from NAC-treated females. In conclusion, NAC is a useful source of sulfate for macromolecular sulfation in vivo when the extracellular sulfate supply is reduced, confirming the potential of therapeutic approaches with thiol compounds to improve skeletal deformity and short stature in human DTD and related disorders.

Relevance:

100.00%

Publisher:

Abstract:

Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions with respect to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed, and the TFS and WFS are defined, leading to necessary and sufficient conditions for the existence of a solution. The TFS of the shoulder model is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on this analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% activation.
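A minimal sketch of a musculo-tendon force coordination problem of the kind analysed above, assuming a quadratic cost (the paper studies the solution set rather than one particular cost function); the moment-arm matrix, bounds and names are illustrative. A torque outside the torque-feasible space (TFS) shows up as an infeasible optimisation.

```python
import numpy as np
from scipy.optimize import minimize

def coordinate_forces(A, tau, f_max):
    """Toy musculo-tendon force coordination: minimise the sum of
    squared normalised fibre forces subject to the torque balance
    A @ f = tau and 0 <= f <= f_max. A (m x n) maps the n fibre
    forces to m joint torques via their moment arms."""
    A = np.asarray(A, dtype=float)
    f_max = np.asarray(f_max, dtype=float)
    cost = lambda f: np.sum((f / f_max) ** 2)
    torque_balance = {"type": "eq", "fun": lambda f: A @ f - tau}
    bounds = [(0.0, fm) for fm in f_max]
    res = minimize(cost, x0=f_max / 2.0, bounds=bounds,
                   constraints=[torque_balance], method="SLSQP")
    if not res.success:
        # no admissible force vector exists: tau lies outside the
        # torque-feasible space (TFS) of this fibre geometry
        raise ValueError(res.message)
    return res.x
```

Changing the fibre geometry changes A, and with it the TFS, which is the mechanism by which the proposed modification recovers DLTa activity.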

Relevance:

100.00%

Publisher:

Abstract:

A version of cascaded systems analysis was developed specifically to study quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled for the x-ray detectors used in six digital mammography systems: four flat panel systems, one computed radiography system and one slot-scan silicon-wafer-based photon counting device. As required inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for the six mammography systems. A new method is described to reconstruct anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector; an image of a sharp, circular disc was used for this purpose. The effective pixel fill factor for the flat panel systems was determined from the axial 1D presampling MTFs measured with a square sharp edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogeneous images acquired as a function of detector air kerma. This was further decomposed into correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the square of the radially averaged EMTF. This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor, and additional electronic and fixed pattern noise on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also explained the observed anisotropy of the quantum NPS and DQE.
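The radially averaged quantities used throughout the analysis (quantum NPS, EMTF) can be produced by a generic helper along the following lines; this is a sketch with assumed names and binning, not the authors' code.

```python
import numpy as np

def radial_average(spectrum_2d, pixel_pitch):
    """Radially average a centred 2D spectrum (e.g. a quantum NPS or
    an EMTF matrix) into a 1D profile over radial spatial frequency."""
    ny, nx = spectrum_2d.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_pitch))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_pitch))
    fr = np.hypot(*np.meshgrid(fx, fy))        # radial frequency per pixel
    step = 1.0 / (nx * pixel_pitch)            # width of one frequency bin
    bins = np.arange(0.0, fr.max() + step, step)
    idx = np.digitize(fr.ravel(), bins)
    values = spectrum_2d.ravel()
    freqs, profile = [], []
    for i in range(1, len(bins)):
        mask = idx == i
        if mask.any():                         # skip empty annular bins
            freqs.append(bins[i - 1])
            profile.append(values[mask].mean())
    return np.array(freqs), np.array(profile)
```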

Relevance:

100.00%

Publisher:

Abstract:

We apply the cognitive hierarchy model of Camerer et al. (Q J Econ 119(3):861-898, 2004), in which players have different levels of reasoning, to Huck et al.'s (Games Econ Behav 38:240-264, 2002) discrete version of Hamilton and Slutsky's (Games Econ Behav 2:29-46, 1990) action commitment game, a duopoly with endogenous timing of entry. We show that, for an empirically reasonable average number of thinking steps, the model rules out Stackelberg equilibria, generates Cournot outcomes including delay, and generates outcomes where the first mover commits to a quantity higher than the Cournot quantity but lower than the Stackelberg leader quantity. We show that a cognitive hierarchy model with quantal responses can explain the most important features of the experimental data on the action commitment game in Huck et al. (2002). To gauge the success of the model in fitting the data, we compare it to a noisy Nash model. We find that the cognitive hierarchy model with quantal responses fits the data better than the noisy Nash model.
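A minimal sketch of the Poisson cognitive hierarchy logic in a plain linear Cournot duopoly, without the quantal responses and the endogenous-timing stage studied in the paper; the demand, cost and tuning numbers are illustrative assumptions.

```python
import numpy as np
from math import exp, factorial

def poisson_ch_quantities(tau=1.5, levels=10, a=100.0, c=10.0):
    """Poisson cognitive hierarchy in a linear Cournot duopoly with
    inverse demand P = a - Q and marginal cost c. A level-k player
    best responds to the normalised Poisson(tau) mix of levels
    0..k-1; level 0 randomises, here summarised by its mean output."""
    best_reply = lambda q_opp: max(0.0, (a - c - q_opp) / 2.0)
    q = [(a - c) / 2.0]                       # mean of a uniform level-0
    weights = [exp(-tau) * tau ** k / factorial(k) for k in range(levels)]
    for k in range(1, levels):
        belief = np.array(weights[:k]) / sum(weights[:k])
        expected_opponent = float(belief @ np.array(q))
        q.append(best_reply(expected_opponent))
    return q                                  # quantity chosen by each level
```

With these beliefs the levels' best replies settle near the Cournot quantity (a - c)/3 rather than the Stackelberg leader quantity, in line with the paper's finding that the model rules out Stackelberg equilibria.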

Relevance:

100.00%

Publisher:

Abstract:

This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation.

Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized midpoint algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It was found that the true midpoint algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure.

Secondly, the consistency of current local failure criteria for ductile fracture has been assessed: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of void nucleation parameter(s) possible and physically sound.

Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for possible engineering application of the present methodology is suggested and discussed.
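For reference, the dilational yield surface behind the integration algorithms discussed above is the Gurson-Tvergaard function; a minimal evaluation sketch, with Tvergaard's commonly quoted parameters as illustrative defaults, is:

```python
import numpy as np

def gurson_tvergaard_yield(sigma_eq, sigma_m, sigma_y, f,
                           q1=1.5, q2=1.0, q3=2.25):
    """Gurson-Tvergaard yield function
        Phi = (sigma_eq/sigma_y)**2
              + 2*q1*f*cosh(1.5*q2*sigma_m/sigma_y)
              - (1 + q3*f**2),
    with von Mises equivalent stress sigma_eq, hydrostatic stress
    sigma_m, matrix flow stress sigma_y and void volume fraction f.
    Phi < 0 is elastic; Phi = 0 lies on the yield surface."""
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sigma_m / sigma_y)
            - (1.0 + q3 * f ** 2))
```

The hydrostatic term is what makes the model dilational: pressure alone can drive yielding once the void volume fraction f is non-zero.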

Relevance:

100.00%

Publisher:

Abstract:

The Questions Answering (Q&A) model for eLearning is based on collaborative learning through questions posed by students and answered by their peers, in contrast with the classical model in which students ask questions of the teacher only. In this proposal we extend the Q&A model to include the concept of social presence, and a quantitative measure of it is proposed; we also consider the evolution of the resulting Q&A social network after the inclusion of social presence, taking into account the feedback on questions posed by students and answered by peers. The behaviour of the social network was simulated using a Multi-Agent System in order to compare the proposed social presence model with the classical and Q&A models.

Relevance:

100.00%

Publisher:

Abstract:

For accurate use of pesticide leaching models it is necessary to assess the sensitivity of the input parameters. The aim of this work was to carry out a sensitivity analysis of the pesticide leaching model PEARL for contrasting soil types of the Dourados river watershed in the state of Mato Grosso do Sul, Brazil. The sensitivity analysis was done by carrying out many simulations with different input parameters and calculating their influence on the output values. The approach used is called one-at-a-time sensitivity analysis, and consists in varying input parameters independently, one at a time, while keeping all others constant at the standard scenario. The sensitivity analysis was automated using the SESAN tool, which was linked to the PEARL model. The results showed that only soil characteristics influenced the simulated water flux, so that this variable did not vary across scenarios with different pesticides in the same soil. All the input parameters that showed the greatest sensitivity with regard to leached pesticide are related to soil and pesticide properties. The sensitivity of all input parameters was scenario dependent, confirming the need to use more than one standard scenario in the sensitivity analysis of pesticide leaching models.
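A generic sketch of the one-at-a-time scheme described above (not the SESAN implementation); the model interface and the sensitivity measure, here the maximum relative change of the output, are assumptions.

```python
def one_at_a_time(model, baseline, spans, output_key):
    """One-at-a-time (OAT) sensitivity analysis: vary each input
    parameter over its span while all others stay at the standard
    scenario, and record the largest relative change in the output.
    `model` maps a dict of input parameters to a dict of outputs."""
    reference = model(baseline)[output_key]
    sensitivity = {}
    for name, values in spans.items():
        outputs = [model(dict(baseline, **{name: v}))[output_key]
                   for v in values]              # perturb one input only
        sensitivity[name] = max(abs(o - reference)
                                for o in outputs) / abs(reference)
    return sensitivity
```

With PEARL wrapped as `model`, a call such as `one_at_a_time(model, standard_scenario, parameter_spans, "leached_pesticide")` would rank the input parameters for one standard scenario; repeating it for several scenarios addresses the scenario dependence noted above.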

Relevance:

100.00%

Publisher:

Abstract:

Computational fluid dynamics (CFD) modeling is an important tool in designing new combustion systems. By using CFD modeling, entire combustion systems can be modeled and their emissions and performance predicted. CFD modeling can also be used to develop new and better combustion systems from an economical and environmental point of view. In CFD modeling of solid fuel combustion, the combustible fuel is generally treated as single fuel particles. One of the limitations of CFD modeling concerns the sub-models describing the combustion of single fuel particles: available models in the scientific literature are in many cases not suitable as sub-models for CFD modeling, since they depend on a large number of input parameters and are computationally heavy. In this thesis, CFD-applicable models are developed for the combustion of single fuel particles. The single particle models can be used to improve the combustion performance in various combustion devices or to develop completely new technologies. The fields investigated are the oxidation of carbon (char-C) and nitrogen (char-N) in char residues from solid fuels.

Modeled char-C oxidation rates are compared to experimental oxidation rates for a large number of pulverized solid fuel chars under relevant combustion conditions. The experiments were performed in an isothermal plug flow reactor operating at 1123-1673 K and 3-15 vol.% O2. In the single particle model, char oxidation is based on apparent kinetics and depends on three fuel-specific parameters: the apparent pre-exponential factor, the apparent activation energy, and the apparent reaction order. The single particle model can be incorporated as a sub-model into a CFD code. The results show that the modeled char oxidation rates are in good agreement with experimental char oxidation rates up to around 70% of burnout. Moreover, the results show that the activation energy and the reaction order can be assumed to be constant for a large number of bituminous coal chars under conditions limited by the combined effects of chemical kinetics and pore diffusion. Based on this, a new model based on only one fuel-specific parameter is developed (Paper III). The results also show that the reaction orders of bituminous coal chars and anthracite chars differ under similar conditions (Paper I and Paper II): reaction orders of bituminous coal chars were found to be one, while reaction orders of anthracite chars were determined to be zero. This difference in reaction orders has not previously been observed in the literature and should be considered in future char oxidation models. One of the most frequently used comprehensive char oxidation models could not explain the difference in the reaction orders; in this thesis (Paper II), a modification to the model is suggested in order to explain the difference in reaction orders between anthracite chars and bituminous coal chars.

Two single particle models are also developed for the NO formation and reduction during the oxidation of single biomass char particles. In the models, the char-N is assumed to be oxidized to NO, and the NO is partly reduced inside the particle. The first model (Paper IV) is based on the concentration gradients of NO inside and outside the particle, while the second model is simplified to such an extent that it is based on apparent kinetics and can be incorporated as a sub-model into a CFD code (Paper V).
Modeled NO release rates from both models were in good agreement with experimental measurements from a single particle reactor of quartz glass operating at 1173-1323 K and 3-19 vol.% O2. In the future, the models can be used to reduce NO emissions in new combustion systems.
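The apparent-kinetics form implied by the three fuel-specific parameters named above can be written as a one-line rate expression; this is a sketch of the functional form, not the thesis's exact model (which also accounts for the combined effects of kinetics and pore diffusion).

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def char_oxidation_rate(T, p_o2, A, E_a, n):
    """Apparent-kinetics char oxidation rate,
        r = A * exp(-E_a / (R * T)) * p_O2**n,
    built from the three fuel-specific parameters named in the text:
    apparent pre-exponential factor A, apparent activation energy E_a
    (J/mol), and apparent reaction order n. Units of r follow A."""
    return A * np.exp(-E_a / (R_GAS * T)) * p_o2 ** n
```

The reaction-order finding above maps directly onto n: n = 1 for the bituminous coal chars and n = 0 for the anthracite chars, in which case the O2 partial pressure drops out of the rate.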

Relevance:

100.00%

Publisher:

Abstract:

In this Master’s thesis, agent-based modeling has been used to analyze maintenance strategy related phenomena. The main research question answered was: what does the agent-based model made for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model then answered the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine and, under certain conditions, also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in more accurate machine condition measurement systems.
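A toy illustration of result (1), assuming invented degradation dynamics and cost numbers (none of which come from the thesis): profit per time step as a function of the condition threshold at which preventive maintenance is triggered.

```python
import numpy as np

def simulate_profit(threshold, horizon=10_000, seed=0,
                    revenue=1.0, pm_cost=20.0, failure_cost=200.0):
    """Toy condition-based maintenance run: the machine's condition
    degrades stochastically; maintaining at `threshold` restores it,
    while failure at condition 0 costs far more. Returns the average
    profit per time step. All dynamics and numbers are illustrative."""
    rng = np.random.default_rng(seed)
    condition, profit = 1.0, 0.0
    for _ in range(horizon):
        condition -= rng.uniform(0.0, 0.01)   # stochastic degradation
        if condition <= 0.0:                  # run-to-failure event
            profit -= failure_cost
            condition = 1.0
        elif condition <= threshold:          # preventive maintenance
            profit -= pm_cost
            condition = 1.0
        profit += revenue * condition         # condition-dependent output
    return profit / horizon
```

Sweeping the threshold, e.g. `max(np.linspace(0.05, 0.9, 18), key=simulate_profit)`, exhibits an interior optimum of the kind the thesis's richer agent-based model locates.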

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this work is to analyze the importance of the gas-solid interphase transfer of turbulent kinetic energy for the accuracy of prediction of the fluid dynamics of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure and velocity fields of both phases in 2-D axisymmetric cylindrical coordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered to be continuous and fully interpenetrating. CFB processes are essentially turbulent. The effective stress on each phase is modelled as that of a Newtonian fluid, where the effective gas viscosity was calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase were calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed, and the results were compared with experimental data available in the literature. The results were obtained using a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations, Consistent) algorithm were used to obtain the numerical solution.
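Of the closures named above, the standard k-epsilon eddy viscosity is compact enough to show directly; a minimal sketch with the standard model constant:

```python
def turbulent_viscosity(rho, k, epsilon, c_mu=0.09):
    """Eddy viscosity of the standard k-epsilon model,
        mu_t = rho * C_mu * k**2 / epsilon,
    used here for the effective gas-phase viscosity; C_mu = 0.09 is
    the standard model constant. k is the turbulent kinetic energy
    and epsilon its dissipation rate."""
    return rho * c_mu * k ** 2 / epsilon
```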

Relevance:

100.00%

Publisher:

Abstract:

In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. In the literature there are few studies on liquid-solid flow at industrial scale, and no information is available about the particular case with modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime; careful estimation of the flow regime is therefore recommended before modeling, and a computational tool was developed for this purpose during the thesis. The homogeneous multiphase model is valid only for homogeneous suspension; the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspension where the pipe Froude number is greater than 1.0, while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the potential location of erosion in the pipe depends on the material density ratio. Possible sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows perpendicular to the main flow in particular affect the location of erosion.
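A sketch of the regime check described above, assuming one common densimetric definition of the pipe Froude number (the thesis's exact definition may differ); the decision rule follows the abstract.

```python
import math

def pipe_froude(v, d_pipe, rho_s, rho_l, g=9.81):
    """Densimetric pipe Froude number for a settling slurry,
        Fr = v / sqrt(g * D * (rho_s / rho_l - 1)),
    with mean velocity v (m/s) and pipe diameter D (m). This is one
    common definition; the thesis's exact form may differ."""
    return v / math.sqrt(g * d_pipe * (rho_s / rho_l - 1.0))

def suggest_multiphase_model(fr):
    """Regime-based model choice following the rule in the abstract."""
    if fr > 1.0:
        return "DPM (homogeneous or heterogeneous suspension)"
    return "mixture or Eulerian model (particles tend to settle)"
```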

Relevance:

100.00%

Publisher:

Abstract:

Volatility has a central role in various theoretical and practical applications in financial markets, including portfolio theory, derivatives pricing and financial risk management. Both theoretical and practical applications require good estimates and forecasts of asset return volatility. The goal of this study is to examine the forecast performance of one of the more recent volatility measures, model-free implied volatility. Model-free implied volatility is extracted from prices in the option markets, and it aims to provide an unbiased estimate of the market's expectation of the future level of volatility. Since it is extracted from option prices, model-free implied volatility should contain all the relevant information that market participants have. Moreover, model-free implied volatility requires less restrictive assumptions than the commonly used Black-Scholes implied volatility, which means that it should be a less biased estimate of the market's expectations, and therefore also a better forecast of future volatility. The forecast performance of model-free implied volatility is evaluated by comparing it to the forecast performance of Black-Scholes implied volatility and a GARCH(1,1) forecast. Weekly forecasts over a six-year period were calculated for the forecast variable, the German stock market index DAX; the data consisted of price observations for DAX index options. Forecast performance was measured using econometric methods that aimed to capture the bias, accuracy and information content of the forecasts. The results of the study suggest that the forecast performance of model-free implied volatility is superior to that of the GARCH(1,1) forecast. However, the results also suggest that the forecast performance of model-free implied volatility is not as good as that of Black-Scholes implied volatility, which is against the theory-based hypotheses. The results of this study are consistent with the majority of prior research on the subject.
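Of the three forecasts compared, the GARCH(1,1) benchmark is the simplest to state; a minimal sketch of the one-step-ahead variance forecast, assuming parameters estimated elsewhere (e.g. by maximum likelihood) and initialisation at the sample variance — both are assumptions, not the study's exact setup.

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) variance forecast,
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t],
    filtered through the observed return history. The parameters are
    assumed to be estimated elsewhere (e.g. by maximum likelihood)."""
    sigma2 = float(np.var(returns))     # initialise at the sample variance
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2                       # variance forecast for next period
```

The corresponding volatility forecast is `np.sqrt(garch11_forecast(...))`, which is the quantity compared against the two implied volatility measures.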