950 results for Decomposition of Ranked Models
Abstract:
The main objective of this study was to determine what kinds of business models are suitable for conducting mobile internet business in emerging markets. A further objective was to identify factors that affect the diffusion of the mobile internet. The study was carried out using both quantitative and qualitative research methods. Using cluster analysis, internally homogeneous country clusters were formed from 40 European countries. These clusters made it possible to design business models suited to different types of markets. Interviews were used to explore experts' views on the factors that affect the diffusion of the mobile internet in emerging markets. The study found that the most important business model elements in emerging markets are pricing, the value proposition and the value network. An inadequate fixed-line network was found to be one of the most important factors promoting the diffusion of the mobile internet in emerging markets.
Abstract:
The human connectome represents a network map of the brain's wiring diagram and the pattern into which its connections are organized is thought to play an important role in cognitive function. The generative rules that shape the topology of the human connectome remain incompletely understood. Earlier work in model organisms has suggested that wiring rules based on geometric relationships (distance) can account for many but likely not all topological features. Here we systematically explore a family of generative models of the human connectome that yield synthetic networks designed according to different wiring rules combining geometric and a broad range of topological factors. We find that a combination of geometric constraints with a homophilic attachment mechanism can create synthetic networks that closely match many topological characteristics of individual human connectomes, including features that were not included in the optimization of the generative model itself. We use these models to investigate a lifespan dataset and show that, with age, the model parameters undergo progressive changes, suggesting a rebalancing of the generative factors underlying the connectome across the lifespan.
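The wiring-rule family discussed above is typically formalized as a product of a geometric penalty and a topological affinity term. The sketch below is a minimal illustration of such a growth rule, assuming a Euclidean distance matrix D, a distance exponent eta, a homophily exponent gamma and a matching-index homophily term; the parameter names and the specific matching-index definition are illustrative and not taken from the study above.

```python
import numpy as np

def matching_index(A):
    """Neighbourhood overlap between node pairs (a common homophily term)."""
    n = A.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            union = np.sum((A[i] + A[j]) > 0) - A[i, j] - A[j, i]
            inter = np.sum((A[i] * A[j]) > 0)
            K[i, j] = K[j, i] = inter / union if union > 0 else 0.0
    return K

def grow_network(D, m, eta, gamma, seed=0):
    """Place m edges one at a time with P_ij ~ D_ij**eta * (K_ij + eps)**gamma."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    A = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    for _ in range(m):
        K = matching_index(A)
        P = (D[iu] ** eta) * ((K[iu] + 1e-6) ** gamma)
        P[A[iu] > 0] = 0.0                      # never duplicate an existing edge
        P = P / P.sum()
        idx = rng.choice(P.size, p=P)
        i, j = iu[0][idx], iu[1][idx]
        A[i, j] = A[j, i] = 1.0
    return A

# Example: 40 random node positions; a negative eta penalises long-distance edges,
# a positive gamma favours connecting nodes with overlapping neighbourhoods.
pos = np.random.default_rng(1).uniform(size=(40, 3))
D = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
A = grow_network(D, m=150, eta=-2.0, gamma=1.5)
print(int(A.sum() // 2), "edges placed")
```

Fitting such a model then amounts to searching over (eta, gamma) for the parameter pair whose synthetic networks best reproduce the topological statistics of an observed connectome.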
Abstract:
Adoptive cell transfer using engineered T cells is emerging as a promising treatment for metastatic melanoma. Such an approach allows one to introduce T cell receptor (TCR) modifications that, while maintaining the specificity for the targeted antigen, can enhance the binding and kinetic parameters for the interaction with peptides (p) bound to major histocompatibility complexes (MHC). Using the well-characterized 2C TCR/SIYR/H-2K(b) structure as a model system, we demonstrated that a binding free energy decomposition based on the MM-GBSA approach provides a detailed and reliable description of the TCR/pMHC interactions at the structural and thermodynamic levels. Starting from this result, we developed a new structure-based approach to rationally design new TCR sequences, and applied it to the BC1 TCR targeting the HLA-A2-restricted NY-ESO-1 (157-165) cancer-testis epitope. Fifty-four percent of the designed sequence replacements exhibited improved pMHC binding as compared to the native TCR, with up to 150-fold increase in affinity, while preserving specificity. Genetically engineered CD8(+) T cells expressing these modified TCRs showed an improved functional activity compared to those expressing BC1 TCR. We measured maximum levels of activities for TCRs within the upper limit of natural affinity, K_D ≈ 1-5 μM. Beyond the affinity threshold at K_D < 1 μM we observed an attenuation in cellular function, in line with the "half-life" model of T cell activation. Our computer-aided protein-engineering approach requires the 3D structure of the TCR-pMHC complex of interest, which can be obtained from X-ray crystallography. We have also developed a homology modeling-based approach, TCRep 3D, to obtain accurate structural models of any TCR-pMHC complexes when experimental data is not available. Since the accuracy of the models depends on the prediction of the TCR orientation over pMHC, we have complemented the approach with a simplified rigid method to predict this orientation and successfully assessed it using all non-redundant TCR-pMHC crystal structures available. These methods potentially extend the use of our TCR engineering method to entire TCR repertoires for which no X-ray structure is available. We have also performed a steered molecular dynamics study of the unbinding of the TCR-pMHC complex to get a better understanding of how TCRs interact with pMHCs. This entire rational TCR design pipeline is now being used to produce rationally optimized TCRs for adoptive cell therapies of stage IV melanoma.
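For reference, the MM-GBSA binding free energy decomposition mentioned above is conventionally written as the sum of a molecular-mechanics term, generalized Born and surface-area solvation terms, and an entropic correction, which can then be redistributed over individual residues of the TCR and pMHC. The expressions below give this standard form; the exact variant used in the study (e.g. the entropy treatment) may differ.

```latex
\Delta G_{\mathrm{bind}} \;\approx\;
\langle E_{\mathrm{MM}} \rangle
+ \langle G_{\mathrm{GB}} \rangle
+ \langle G_{\mathrm{SA}} \rangle
- T\Delta S,
\qquad
E_{\mathrm{MM}} = E_{\mathrm{int}} + E_{\mathrm{elec}} + E_{\mathrm{vdW}},
\qquad
\Delta G_{\mathrm{bind}} \approx \sum_{r \in \text{residues}} \Delta G_r .
```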
Abstract:
In this paper, we obtain sharp asymptotic formulas with error estimates for the Mellin convolution of functions defined on (0, ∞), and use these formulas to characterize the asymptotic behavior of marginal distribution densities of stock price processes in mixed stochastic models. Special examples of mixed models are jump-diffusion models and stochastic volatility models with jumps. We apply our general results to the Heston model with double exponential jumps, and make a detailed analysis of the asymptotic behavior of the stock price density, the call option pricing function, and the implied volatility in this model. We also obtain similar results for the Heston model with jumps distributed according to the NIG law.
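For context, the Mellin convolution referred to above is the multiplicative analogue of the ordinary convolution: for functions f and g on (0, ∞),

```latex
(f \star g)(x) \;=\; \int_0^{\infty} f\!\left(\frac{x}{y}\right) g(y)\,\frac{dy}{y},
\qquad x > 0 .
```

Its relevance here is that the density of a product of independent positive random variables is the Mellin convolution of their densities, so marginal price densities in mixed models naturally take this form and inherit the asymptotic formulas and error estimates.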
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to highlight the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture, namely the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built up in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
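For reference, the Gurson-Tvergaard yield function underlying the algorithms above couples the deviatoric (von Mises) stress with the hydrostatic stress through the void volume fraction f, and the hydrostatic/deviatoric decomposition used for the consistent tangent moduli is already visible in its standard form,

```latex
\Phi(\sigma_{\mathrm{eq}}, \sigma_m, f) \;=\;
\left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
+ 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right)
- \left(1 + q_3 f^{2}\right) \;=\; 0,
\qquad
\boldsymbol{\sigma} = \sigma_m \mathbf{I} + \mathbf{s},\quad
\sigma_{\mathrm{eq}} = \sqrt{\tfrac{3}{2}\,\mathbf{s}:\mathbf{s}},
```

where σ_y is the flow stress of the matrix material and q_1, q_2, q_3 (commonly with q_3 = q_1²) are Tvergaard's parameters.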
Abstract:
The aim of this work is to compare two families of mathematical models with respect to their capability to capture the statistical properties of real electricity spot market time series. The first family consists of ARMA-GARCH models and the second of mean-reverting Ornstein-Uhlenbeck models. Both model families were applied to two price series of the Nordic Nord Pool spot market for electricity, namely the System prices and the DenmarkW prices. The parameters of both models were calibrated from the real time series. After carrying out simulations with the optimal models from both families, we conclude that neither ARMA-GARCH models nor conventional mean-reverting Ornstein-Uhlenbeck models, even when calibrated optimally against real electricity spot market price or return series, capture the statistical characteristics of the real series. However, in the case of the less spiky behavior (the System prices), the mean-reverting Ornstein-Uhlenbeck model can be seen to have partially succeeded in this task.
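For reference, the two model families compared above are commonly written as follows (a representative ARMA(1,1)-GARCH(1,1) specification for returns and a one-factor mean-reverting Ornstein-Uhlenbeck process); the exact model orders and calibration targets used in the thesis may differ.

```latex
\begin{aligned}
&\text{ARMA(1,1)-GARCH(1,1):}\quad
r_t = \mu + \phi\, r_{t-1} + \varepsilon_t + \theta\, \varepsilon_{t-1},\quad
\varepsilon_t = \sigma_t z_t,\ z_t \sim \mathcal{N}(0,1),\quad
\sigma_t^{2} = \omega + \alpha\, \varepsilon_{t-1}^{2} + \beta\, \sigma_{t-1}^{2};\\[4pt]
&\text{Ornstein-Uhlenbeck:}\quad
dX_t = \kappa\,(\mu_X - X_t)\,dt + \sigma\, dW_t .
\end{aligned}
```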
Abstract:
Asian rust of soybean [Glycine max (L.) Merrill] is one of the most important fungal diseases of this crop worldwide. The recent introduction of Phakopsora pachyrhizi Syd. & P. Syd. into the Americas represents a major threat to soybean production in the main growing regions, and significant losses have already been reported. P. pachyrhizi is extremely aggressive under favorable weather conditions, causing rapid plant defoliation. Epidemiological studies, under both controlled and natural environmental conditions, have been carried out for several decades with the aim of elucidating factors that affect the disease cycle as a basis for disease modeling. The recent spread of Asian soybean rust to major production regions of the world has promoted new development, testing and application of mathematical models to assess the risk of and predict the disease. These efforts have included the integration of new data, epidemiological knowledge, statistical methods, and advances in computer simulation to develop models and systems with different spatial and temporal scales, objectives and audiences. In this review, we present a comprehensive discussion of the models and systems that have been tested to predict and assess the risk of Asian soybean rust. Limitations, uncertainties and challenges for modelers are also discussed.
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes and excavators. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. Only in rare situations does the flow become laminar as the pressure drop over the orifice approaches zero, e.g. when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch its direction during operation. This means that, in terms of accuracy, the description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur close to zero pressure drop, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed. This trade-off is investigated for the two-regime orifice flow model. In addition, very small fluid volumes exist inside many types of valves, as well as between them.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a major weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/B_e of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Moreover, the method is freely applicable regardless of the integration routine applied. A further strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the required component information can be taken from the manufacturer’s nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
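To make the first proposal concrete, a minimal sketch of a two-regime orifice model is given below. It follows the idea described above (a turbulent square-root law blended into a polynomial near zero pressure drop, with the first derivative matched at the transition), but uses a simple odd cubic in closed form rather than the cubic spline formulation used in the thesis, and all parameter values are illustrative.

```python
import numpy as np

def orifice_flow(dp, Cd=0.7, A=1e-5, rho=860.0, p_tr=2e5):
    """
    Two-regime orifice model: turbulent square-root law for |dp| >= p_tr,
    odd cubic polynomial below it so that Q and dQ/d(dp) are continuous
    at the transition pressure p_tr (all parameter values illustrative).
    """
    K = Cd * A * np.sqrt(2.0 / rho)       # turbulent gain: Q = K * sqrt(dp)
    a = 5.0 * K / (4.0 * np.sqrt(p_tr))   # cubic coefficients obtained from the
    b = -K / (4.0 * p_tr ** 2.5)          # value/slope matching conditions at +-p_tr
    dp = np.asarray(dp, dtype=float)
    turb = np.sign(dp) * K * np.sqrt(np.abs(dp))
    cubic = a * dp + b * dp ** 3
    return np.where(np.abs(dp) >= p_tr, turb, cubic)

# The derivative at dp = 0 is finite (equal to a), so explicit fixed-step
# integrators no longer see an unbounded gradient near zero pressure drop.
print(orifice_flow([0.0, 1e5, 5e5]))
```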
Abstract:
Litter fall consists of all organic material deposited on the forest floor and is extremely important for the structure and maintenance of the ecosystem through nutrient cycling. This study aimed to evaluate the production and decomposition of litter fall in a fragment of secondary Atlantic Forest at the Guarapiranga Ecological Park, in São Paulo, SP. Litter samples were taken monthly from May 2012 to May 2013. To assess the litter fall input, forty collectors were installed randomly within an area of 0.5 ha. The collected material was sent to the laboratory, dried at 65 °C for 72 hours, subsequently separated into fractions of leaves, twigs, reproductive parts and miscellaneous material, and weighed to obtain the dry biomass. Litterbags were placed and tied close to the collectors to estimate the decomposition rate by evaluating the loss of dry biomass at 30, 60, 90, 120 and 150 days. After each collection, the material was sent to the laboratory to be dried and weighed again. Total litter fall over the year reached 5.7 Mg ha⁻¹ yr⁻¹, and most of the material was collected from September to March. Leaves made the largest contribution to total litter fall (72%), followed by twigs (14%), reproductive parts (11%) and miscellaneous material (3%). Reproductive parts peaked during the wet season. Positive correlations were observed between total litter and precipitation, temperature and radiation (r = 0.66, p < 0.05; r = 0.76, p < 0.05; r = 0.58, p < 0.05, respectively). Multiple regression showed that precipitation and radiation contributed significantly to litter fall production. The decomposition rate was within the interval expected for secondary tropical forest and was correlated with rainfall. It was concluded that this fragment of secondary forest shows a seasonality effect driven mainly by precipitation and radiation, both important components of foliage renewal for the plant community, and that decomposition proceeded at an intermediate rate.
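Decomposition rates from litterbag data of the kind described above are commonly summarized with the single-exponential (Olson) decay model. The abstract does not state which model was fitted, so the expression below is given only as the usual reference form, where X_0 is the initial dry mass, X_t the mass remaining at time t, k the decay constant and t_{1/2} the corresponding half-life:

```latex
\frac{X_t}{X_0} = e^{-kt},
\qquad
k = -\frac{1}{t}\,\ln\!\left(\frac{X_t}{X_0}\right),
\qquad
t_{1/2} = \frac{\ln 2}{k}.
```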
Poultry carcass decomposition and physicochemical analysis of the compost obtained in different composter types
Abstract:
This study aimed to assess five composter types for the decomposition of poultry carcasses and to perform a physicochemical analysis of the compost obtained. The composter types used were six-hole brick, wood, screen, windrow with three six-hole PVC pipes, and windrow with three ten-hole PVC pipes. Composting was monitored over four periods, using wood shavings as the substrate, with one bird carcass placed in each composter. Pile turning was performed every 10 days, and the temperature of each layer was measured on the 1st, 7th, 14th, 19th and 29th days at 3 p.m., together with the ambient temperature. The temperature during pile turning was also measured at five points per layer, and the carcasses were weighed to calculate the decomposition percentage. The physicochemical parameters evaluated in the substrates, up to 30 days, were moisture, ash, phosphorus, potassium, nitrogen, pH, organic carbon and C/N ratio. Data were analyzed with a repeated-measures model using the MIXED procedure of the SAS software. All final physicochemical values of the substrates complied with the values of IN-25, except for nitrogen. All composter types were efficient in the decomposition of poultry carcasses.
Abstract:
The present study was conducted at the Department of Rural Engineering and the Department of Animal Morphology and Physiology of FCAV/Unesp, Jaboticabal, SP, Brazil. The objective was to verify the influence of roof slope, exposure and roofing material on the internal temperature of reduced-scale models of animal production facilities. For the research, 48 reduced-scale, dismountable models with dimensions of 1.00 × 1.00 × 0.50 m were used. The roof was shed-type, and the models faced either North or South, with 24 models for each exposure. Ceramic, galvanized-steel and fiber-cement tiles were used to build the roofs. Slopes were 20, 30, 40 and 50% for the ceramic tile and 10, 30, 40 and 50% for the other two materials. Inside the models, temperature readings were taken every hour for 12 months. The results were evaluated with a general linear model in a nested 3 × 4 × 2 factorial arrangement, in which the effects of roofing material and exposure were nested within the factor slope. Means were compared by the Tukey test at 5% probability. After analyzing the data, we observed that increasing the slope and exposing the models to the South lowered the internal temperature of the models at the geographic coordinates of the city of Jaboticabal (SP, Brazil).
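A hedged sketch of how a nested factorial analysis of this kind can be set up is shown below, using Python's statsmodels rather than the (unspecified) software used in the study; the data frame, column names and randomly generated values are hypothetical, and the real analysis would use the measured hourly temperatures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per hourly temperature reading.
rng = np.random.default_rng(0)
n = 480
df = pd.DataFrame({
    "temp": rng.normal(30.0, 2.0, n),                        # internal temperature, deg C
    "slope": rng.choice([10, 30, 40, 50], n),                # roof slope, %
    "material": rng.choice(["ceramic", "steel", "fibro"], n),
    "exposure": rng.choice(["North", "South"], n),
})

# Roofing material and exposure nested within slope, as described in the abstract.
model = smf.ols("temp ~ C(slope) + C(slope):C(material) + C(slope):C(exposure)",
                data=df).fit()
print(sm.stats.anova_lm(model))            # sequential ANOVA table

# Tukey HSD comparison of exposure means, purely for illustration.
print(pairwise_tukeyhsd(df["temp"], df["exposure"], alpha=0.05))
```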
Abstract:
In the field of anxiety research, animal models are used as screening tools in the search for compounds with therapeutic potential and as simulations for research on mechanisms underlying emotional behaviour. However, a solely pharmacological approach to the validation of such tests has resulted in distinct problems with their applicability to systems other than those involving the benzodiazepine/GABA-A receptor complex. In this context, recent developments in our understanding of mammalian defensive behaviour have not only prompted the development of new models but also attempts to refine existing ones. The present review focuses on the application of ethological techniques to one of the most widely used animal models of anxiety, the elevated plus-maze paradigm. This fresh approach to an established test has revealed a hitherto unrecognized multidimensionality to plus-maze behaviour and, as it yields comprehensive behavioural profiles, has many advantages over conventional methodology. This assertion is supported by reference to recent work on the effects of diverse manipulations including psychosocial stress, benzodiazepines, GABA receptor ligands, neurosteroids, 5-HT1A receptor ligands, and panicolytic/panicogenic agents. On the basis of this review, it is suggested that other models of anxiety may well benefit from greater attention to behavioural detail.
Abstract:
The purpose of this Master’s thesis was to study business model development in the Finnish newspaper industry over the next ten years through scenario planning. The objective was to see how the business models will develop amidst the many changes in the industry, what factors are driving the change, what the implications of these changes are for the players in the industry, and how the Finnish newspaper companies should evolve in order to succeed in the future. In this thesis, business model change is studied across all the elements of business models, as it was discovered that the industry too often focuses on changes in only a few of those elements, and a broader view can provide valuable information for the companies. The results revealed that the industry will be affected by many changes during the next ten years. Scenario planning provides a good tool for analyzing this change and for developing valuable options for businesses. After conducting a series of interviews and identifying the forces driving the change, four different scenarios were developed, centered on the role that newspapers will take and the level at which they will provide content in the future. These scenarios indicated that there is a variety of options in the way the business models may develop and that companies should start making decisions proactively in order to succeed. As the business model elements are interdependent, changes made in individual elements will affect the whole model, making these decisions about the role and the level of content important for the companies. In the future, it is likely that the Finnish newspaper industry will include many different kinds of business models, some of which may be drastically different from the current ones and some of which may still be similar but take the new kind of media environment better into account.
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is especially interesting for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form compounds of higher molecular weight. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modelling effort. The literature survey revealed that the change in tar composition is likely caused by the kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements combined with an industrially relevant temperature interval. The aspects covered in the modelling include the screening of possible numerical approaches, the testing of optimization methods, and kinetic modelling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model against which the two other models were compared. A compact model that included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in the thermal reactions.
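As a minimal illustration of the optimizer comparison mentioned above (evolutionary search versus gradient-based Levenberg-Marquardt fitting of kinetic parameters), the sketch below fits a simple first-order decomposition model to synthetic data with both approaches. The model, data and parameter bounds are hypothetical and unrelated to the LLNL or other mechanisms discussed in the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Hypothetical first-order decomposition of a light hydrocarbon: c(t) = c0 * exp(-k*t).
rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 2.0, 25)                           # residence time, s
c_data = 1.0 * np.exp(-1.8 * t_data) + rng.normal(0, 0.01, t_data.size)

def model(params, t):
    c0, k = params
    return c0 * np.exp(-k * t)

def residuals(params):
    return model(params, t_data) - c_data

# Gradient-based Levenberg-Marquardt: fast, but needs a sensible initial guess.
lm = least_squares(residuals, x0=[0.5, 0.5], method="lm")

# Evolutionary search over wide bounds: slower, but no initial guess required.
de = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                            bounds=[(0.0, 10.0), (0.0, 10.0)], seed=0)

print("LM estimate (c0, k):", lm.x)
print("DE estimate (c0, k):", de.x)
```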
Abstract:
Immunoglobulin E (IgE) and mast cells are believed to play important roles in allergic inflammation. However, their contributions to the pathogenesis of human asthma have not been clearly established. Significant progress has been made recently in our understanding of airway inflammation and airway hyperresponsiveness through studies of murine models of asthma and genetically engineered mice. Some of the studies have provided significant insights into the role of IgE and mast cells in the allergic airway response. In these models mice are immunized systemically with soluble protein antigens and then receive an antigen challenge through the airways. Bronchoalveolar lavage fluid from mice with allergic airway inflammation contains significant amounts of IgE. The IgE can capture the antigen presented to the airways and the immune complexes so formed can augment allergic airway response in a high-affinity IgE receptor (FcεRI)-dependent manner. Previously, there were conflicting reports regarding the role of mast cells in murine models of asthma, based on studies of mast cell-deficient mice. More recent studies have suggested that the extent to which mast cells contribute to murine models of asthma depends on the experimental conditions employed to generate the airway response. This conclusion was further supported by studies using FcεRI-deficient mice. Therefore, IgE-dependent activation of mast cells plays an important role in the development of allergic airway inflammation and airway hyperresponsiveness in mice under specific conditions. The murine models used should be of value for testing inhibitors of IgE or mast cells for the development of therapeutic agents for human asthma.