945 results for Data modeling
Abstract:
The IEEE 802.16 standard specifies a contention-based bandwidth request scheme for best-effort and non-real-time polling services in the Point-to-Multipoint (PMP) architecture. In this letter we propose an analytical model for the scheme and study how bandwidth efficiency and channel access delay vary with the contention window size, the number of contending subscriber stations, and the number of slots allocated for bandwidth requests and data transmission. Simulations validate the model's high accuracy. © 2007 IEEE.
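As a point of reference, a toy calculation (not the letter's analytical model) of how request success and bandwidth efficiency depend on the number of contending stations and the slot allocation; the uniform slot choice and the example slot counts are illustrative assumptions:

```python
# Toy model of a contention-based bandwidth request phase (illustrative only).
# Assumption: each of n stations picks one of R request slots uniformly at
# random; a request succeeds iff no other station picks the same slot.

def request_success_prob(n: int, R: int) -> float:
    """Probability that a given station's request does not collide."""
    return (1.0 - 1.0 / R) ** (n - 1)

def expected_granted_requests(n: int, R: int) -> float:
    """Expected number of stations whose requests succeed in one frame."""
    return n * request_success_prob(n, R)

def bandwidth_efficiency(n: int, R: int, data_slots: int) -> float:
    """Fraction of the frame carrying data, discounted when too few
    requests are granted to fill the allocated data slots."""
    granted = expected_granted_requests(n, R)
    used_data_slots = min(granted, data_slots)
    return used_data_slots / (R + data_slots)

if __name__ == "__main__":
    for n in (4, 8, 16, 32):
        print(n, round(bandwidth_efficiency(n, R=8, data_slots=24), 3))
```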
Abstract:
The heterogeneously catalyzed transesterification reaction for the production of biodiesel from triglycerides was investigated with respect to reaction mechanism and kinetic constants. Three elementary reaction mechanisms, Eley-Rideal (ER), Langmuir-Hinshelwood-Hougen-Watson (LHHW), and Hattori, with assumptions such as quasi-steady-state conditions for the surface species and either methanol adsorption or surface reactions as the rate-determining step, were applied to predict the catalyst surface coverage and the bulk concentration using a multiscale simulation framework. The rate expression based on methanol adsorption as the rate-limiting step in the LHHW elementary mechanism was found to be statistically the most reliable representation of the experimental data for hydrotalcite catalysts with different formulations. © 2011 American Chemical Society.
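For orientation, a generic LHHW-type rate expression with methanol adsorption as the rate-limiting step has the form below; the notation (rate constant k, adsorption equilibrium constants K_i, bulk concentrations C_i for methanol, triglyceride TG, ester E, and glycerol G) is standard, and the exact expression fitted in the paper may differ:

\[
-r \;=\; \frac{k\,C_{\mathrm{MeOH}}}{1 + K_{\mathrm{TG}}C_{\mathrm{TG}} + K_{\mathrm{E}}C_{\mathrm{E}} + K_{\mathrm{G}}C_{\mathrm{G}}}
\]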
Abstract:
Construction customers are persistently seeking to achieve sustainability and maximize value, as sustainability has become a major consideration in the construction industry. In particular, it is essential to refurbish whole houses to achieve the sustainability agenda of an 80% CO2 reduction by 2050, as the housing sector accounts for 28% of total UK CO2 emissions. However, whole house refurbishment is challenging due to the highly fragmented nature of construction practice, which makes the integration of diverse information throughout the project lifecycle difficult. Consequently, Building Information Modeling (BIM) has become increasingly difficult to ignore as a way to manage construction projects collaboratively, although its current uptake in the housing sector is low, at 25%. This research aims to investigate homeowners' decision-making factors for housing refurbishment projects and to provide a valuable dataset as an essential input to BIM for such projects. One hundred and twelve homeowners and 39 construction professionals involved in UK housing refurbishment were surveyed. The survey revealed that homeowners value initial cost more, while construction professionals value thermal performance more. Both groups considered roof refurbishment the first priority. The research also showed that BIM requires a proper BIM dataset and objects for housing refurbishment.
Abstract:
Determination of the so-called optical constants (complex refractive index N, which is usually a function of the wavelength, and physical thickness D) of thin films from experimental data is a typical inverse non-linear problem. It remains a challenge to the scientific community because of the complexity of the problem and its basic and technological significance in optics. Usually, solutions are sought with models having 3-10 parameters, and the best estimates of these parameters are obtained by minimization procedures. Herein, we discuss the choice of orthogonal polynomials for the dispersion law of the thin film refractive index and show the advantage of their use compared to the Sellmeier, Lorentz, or Cauchy models.
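For comparison, the classical dispersion laws mentioned and an orthogonal-polynomial alternative can be written as follows; the Chebyshev basis T_k and the wavelength normalization are illustrative choices rather than the specific formulation of the paper:

\[
n_{\mathrm{Cauchy}}(\lambda) = A + \frac{B}{\lambda^{2}} + \frac{C}{\lambda^{4}},\qquad
n_{\mathrm{Sellmeier}}^{2}(\lambda) = 1 + \sum_{i}\frac{B_{i}\lambda^{2}}{\lambda^{2}-C_{i}},\qquad
n_{\mathrm{poly}}(\lambda) = \sum_{k=0}^{m} a_{k}\,T_{k}\!\left(\frac{2\lambda-\lambda_{\min}-\lambda_{\max}}{\lambda_{\max}-\lambda_{\min}}\right)
\]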
Abstract:
This paper focuses on the development of methods and a cascade of models for flood monitoring and forecasting and on their implementation in a Grid environment. The processing of satellite data for flood extent mapping is done using neural networks. For flood forecasting we use a cascade of models: a regional numerical weather prediction (NWP) model, a hydrological model, and a hydraulic model. The implementation of the developed methods and models in the Grid infrastructure and related projects is discussed.
Abstract:
Questions of forming learning sets for artificial neural networks in lossless data compression problems are considered. Methods for constructing and using learning sets are studied. A way of forming the learning set while training an artificial neural network on a data stream is proposed.
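A minimal sketch of one common way to build such a learning set from a data stream, pairing a sliding context window with the next symbol to be predicted; the window length and the pairing scheme are assumptions, not necessarily the method proposed in the paper:

```python
# Build (context, next_byte) training pairs from a byte stream with a sliding
# window, so a neural predictor can be trained for lossless compression.
from collections import deque
from typing import Iterable, Iterator, List, Tuple

def learning_set(stream: Iterable[int], window: int = 8) -> Iterator[Tuple[List[int], int]]:
    """Yield (previous `window` bytes, next byte) pairs as the stream arrives."""
    context = deque(maxlen=window)
    for byte in stream:
        if len(context) == window:
            yield list(context), byte   # input pattern and its target symbol
        context.append(byte)

if __name__ == "__main__":
    data = b"abracadabra abracadabra"
    pairs = list(learning_set(data, window=4))
    print(len(pairs), pairs[0])
```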
Abstract:
A polyparametric intelligent information system for diagnosing human functional state in medicine and public health has been developed. The essence of the system is the description of human functional state with a unified set of physiological parameters and the use of a polyparametric cognitive model developed as a tool for the systems analysis of multidimensional data and the diagnostics of human functional state. The model is built on general principles of geometry and symmetry using algorithms of artificial intelligence systems. The architecture of the system is presented. The model allows analysis of both traditional signs, the absolute values of electrophysiological parameters, and new signs generated by the model, the relationships between them. The classification of multidimensional physiological data is performed with a transformer of the model. The results are presented to the physician as a visual graph, a pattern of the individual functional state, which supports clinical syndrome analysis. The level of a person's functional state is determined against a developed standard ("ideal") functional state. The complete formalization of the results makes it possible to accumulate physiological data and to analyze them by mathematical methods.
Abstract:
Formal grammars can be used for describing complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used for modeling genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene, and their detection in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition by the induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. The L-grammar rules are derived automatically from Drosophila and vertebrate promoter datasets using a genetic programming technique, and their fitness is evaluated using a Support Vector Machine (SVM) classifier. Artificial promoter sequences generated with the derived L-grammar rules are analyzed and compared with natural promoter sequences.
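A minimal sketch of parallel stochastic rewriting with a context-free L-grammar (a stochastic 0L-system); the toy alphabet, productions, and weights are illustrative and are not rules induced from promoter data:

```python
# Stochastic 0L-system: every symbol of the string is rewritten in parallel,
# choosing one of its productions at random according to the rule weights.
import random

RULES = {  # symbol -> list of (weight, replacement); toy example only
    "A": [(0.7, "AC"), (0.3, "TA")],
    "C": [(0.5, "CG"), (0.5, "C")],
}

def rewrite(word: str, rules=RULES) -> str:
    """Rewrite every symbol in parallel with a stochastically chosen production."""
    out = []
    for symbol in word:
        options = rules.get(symbol)
        if options:
            weights, replacements = zip(*options)
            out.append(random.choices(replacements, weights=weights)[0])
        else:
            out.append(symbol)          # symbols without rules are copied as-is
    return "".join(out)

def derive(axiom: str, steps: int) -> str:
    word = axiom
    for _ in range(steps):
        word = rewrite(word)
    return word

if __name__ == "__main__":
    print(derive("A", steps=5))         # a random artificial sequence
```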
Abstract:
The possibility to analyze, quantify, and forecast epidemic outbreaks is fundamental when devising effective disease containment strategies. Policy makers face the intricate task of drafting realistically implementable policies that strike a balance between risk management and cost. Two major techniques at their disposal are epidemic modeling and contact tracing. Models are used to forecast the evolution of the epidemic both globally and regionally, while contact tracing is used to reconstruct the chain of people who have potentially been infected, so that they can be tested, isolated, and treated immediately. However, both techniques may provide limited information, especially during an already advanced crisis when the need for action is urgent. In this paper we propose an alternative approach that goes beyond epidemic modeling and contact tracing, and leverages behavioral data generated by mobile carrier networks to evaluate contagion risk on a per-user basis. The individual risk represents the loss incurred by not isolating or treating a specific person, both in terms of how likely this person is to spread the disease and how many secondary infections they would cause. To this aim, we develop a model, named Progmosis, which quantifies this risk based on movement and regionally aggregated statistics about infection rates. We develop and release an open-source tool that calculates this risk from cellular network events. We simulate a realistic epidemic scenario based on an Ebola virus outbreak and find that gradually restricting the mobility of a subset of individuals reduces the number of infected people after 30 days by 24%.
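A hedged sketch of a per-user risk score in this spirit, combining the probability that a user has been infected (from time spent in regions weighted by regional infection rates) with the expected number of secondary infections; the inputs, parameters, and weighting are hypothetical and do not reproduce the Progmosis formulation:

```python
# Illustrative per-user contagion risk: P(user is infected), estimated from time
# spent in each region weighted by regional infection rates, multiplied by the
# expected number of secondary infections estimated from the user's contacts.

def infection_probability(time_share_by_region: dict, infection_rate: dict) -> float:
    """Exposure-weighted probability of having been infected (capped at 1)."""
    p = sum(share * infection_rate.get(region, 0.0)
            for region, share in time_share_by_region.items())
    return min(p, 1.0)

def individual_risk(time_share_by_region: dict, infection_rate: dict,
                    daily_contacts: float, transmission_prob: float,
                    infectious_days: float) -> float:
    """Expected secondary infections attributable to not isolating this user."""
    p_infected = infection_probability(time_share_by_region, infection_rate)
    expected_secondary = daily_contacts * transmission_prob * infectious_days
    return p_infected * expected_secondary

if __name__ == "__main__":
    risk = individual_risk({"district_A": 0.6, "district_B": 0.4},
                           {"district_A": 0.02, "district_B": 0.10},
                           daily_contacts=12, transmission_prob=0.03,
                           infectious_days=7)
    print(round(risk, 3))
```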
Abstract:
This paper focuses on the quality of dyadic relationships in supply chains. The literature offers numerous approaches to describing the development of supply chain relationships, but these theories describe the evolution of dyadic relationships mainly at a theoretical level and do not address their empirical testability. In this paper we attempt an empirical investigation of the development of supply chain relationships, asking whether the life cycle hypothesis can be applied to the development of business relationships over time. Our paper combines two approaches: using data from an internet-based questionnaire and applying quantitative analysis, it tests the hypothesis that business relationship development over time can be described with the concept of a life cycle. The life cycle concept is widely used in business research; among others, the diffusion of innovation and the product life cycle are described with it, just to name a few. All of these studies analyze the life cycle along a specific variable (for example, the volume of sales or revenue in the case of the product life cycle) which, except for the last stage of the cycle, the decline, has a cumulative character, resulting in the widely known shape of a life cycle. Consequently, testing a life cycle hypothesis inevitably means accepting some type of cumulativity in the development.
Abstract:
Regional climate models (RCMs) provide reliable climatic predictions for the next 90 years with high horizontal and temporal resolution. In the 21st century, a northward latitudinal and upward altitudinal shift of the distribution of plant species and phytogeographical units is expected. We discuss how the modeling of a phytogeographical unit can be reduced to modeling plant distributions. The predicted shift of the Moesz line is studied as a case study (with three different modeling approaches) using 36 parameters of the REMO regional climate dataset, ArcGIS geographic information software, and the periods 1961-1990 (reference period), 2011-2040, and 2041-2070. The disadvantages of this relatively simple climate envelope modeling (CEM) approach are then discussed and several ways of improving the model are suggested. Statistical and artificial intelligence (AI) methods (logistic regression, cluster analysis and other clustering methods, decision trees, evolutionary algorithms, artificial neural networks) can support such improvement. Among them, artificial neural networks (ANNs) seem to be the most suitable algorithm for this purpose, providing a black-box method for distribution modeling.
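As a baseline, a climate envelope model of the simple kind discussed above can be sketched as follows; the percentile-based envelope, the array layout, and the synthetic data are assumptions for illustration only:

```python
# Minimal climate envelope model: learn per-variable bounds from cells where the
# species (or phytogeographical unit) is present in the reference period, then
# flag future-period cells whose climate falls inside every bound.
import numpy as np

def fit_envelope(ref_climate: np.ndarray, presence: np.ndarray,
                 lo: float = 5, hi: float = 95):
    """ref_climate: (n_cells, n_vars) climate grid; presence: boolean (n_cells,)."""
    occupied = ref_climate[presence]
    return (np.percentile(occupied, lo, axis=0),
            np.percentile(occupied, hi, axis=0))

def project(future_climate: np.ndarray, envelope) -> np.ndarray:
    """True for future cells whose climate lies inside the fitted envelope."""
    lower, upper = envelope
    return np.all((future_climate >= lower) & (future_climate <= upper), axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(1000, 36))          # e.g. 36 REMO-derived parameters
    pres = ref[:, 0] > 0.5                     # synthetic presence mask
    fut = ref + 0.3                            # a uniform "warming" shift
    print(project(fut, fit_envelope(ref, pres)).mean())
```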
Abstract:
Ecological models have often been used to answer questions at the forefront of recent research, such as the possible effects of climate change. The methodology of tactical models is a very useful tool compared to complex models requiring a relatively large set of input parameters. In this study, a theoretical strategic model (TEGM) was adapted to field data on the basis of a 24-year monitoring database of phytoplankton in the Danube River at the station of Göd, Hungary (at river kilometer 1669, hereafter "rkm"). The Danubian Phytoplankton Growth Model (DPGM) is able to describe the seasonal dynamics of phytoplankton biomass (mg L−1) based on daily temperature, while also taking the availability of light into consideration. In order to improve the fit, the 24-year database was split into two parts according to environmental conditions: the period 1979-1990 had a higher level of nutrient excess than 1991-2002. The authors assume that in these two periods phytoplankton responded to temperature in two different ways, thus two submodels were developed, DPGM-sA and DPGM-sB. Observed and simulated data correlated quite well. The findings suggest that a linear temperature rise brings drastic change to phytoplankton only in the case of high nutrient load, and that it is mostly realized through an increase in yearly total biomass.
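A toy daily-update sketch of a temperature- and light-driven biomass recurrence in the spirit of the DPGM; the Gaussian temperature response, the capacity term, and all coefficients are invented for illustration and are not the published model:

```python
# Toy phytoplankton biomass recurrence: growth depends on a Gaussian temperature
# response scaled by light availability, limited by a capacity term, with a
# constant daily loss rate (all values illustrative).
import math

def growth_rate(temp_c, light, mu_max=0.6, t_opt=22.0, sigma=8.0):
    """Daily specific growth rate from temperature (°C) and relative light (0..1)."""
    return mu_max * math.exp(-((temp_c - t_opt) / sigma) ** 2) * light

def simulate(temps, lights, b0=0.5, loss=0.25, capacity=30.0):
    """Return the daily biomass series (mg L^-1) for given temperature/light series."""
    biomass = [b0]
    for t, l in zip(temps, lights):
        b = biomass[-1]
        db = b * (growth_rate(t, l) * (1.0 - b / capacity) - loss)
        biomass.append(max(b + db, 0.01))   # keep a small inoculum over winter
    return biomass

if __name__ == "__main__":
    days = range(365)
    temps = [12 + 10 * math.sin(2 * math.pi * (d - 100) / 365) for d in days]
    lights = [0.5 + 0.5 * math.sin(2 * math.pi * (d - 80) / 365) for d in days]
    series = simulate(temps, lights)
    print(round(max(series), 2))
```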
Abstract:
In 2004 and 2005, we collected samples of phytoplankton, zooplankton, and macroinvertebrates in a small artificial pond in Budapest (Hungary). We set up a simulation model predicting the abundances of the cyclopoids, Eudiaptomus zachariasi, and Ischnura pumilio by considering only temperature and the previous day's abundance of each population. Phytoplankton abundance was simulated by considering not only temperature but also the abundances of the three groups mentioned above. We then ran the model with data series of internationally accepted climate change scenarios and discussed the different outcomes. A comparative assessment of the alternative climate change scenarios was also carried out with statistical methods.
Abstract:
The potential future distribution of four Mediterranean pines was modeled with the support of the EUFORGEN digital area database (distribution maps), the Spatial Analyst module of ESRI ArcGIS 10 (modeling environment), PAST (statistical calibration of the model), and the REMO regional climate model (climatic data). The studied species were Pinus brutia, Pinus halepensis, Pinus pinaster, and Pinus pinea. The climate data were available on a 25 km resolution grid for the reference period (1961-90) and two future periods (2011-40, 2041-70). The climate model was based on the IPCC SRES A1B scenario. The model results show an explicit northward shift of the distributions for three of the four studied species. The future (2041-70) climate of Western Hungary seems to be suitable for Pinus pinaster.
Abstract:
This paper describes the development and application of an ESRI ArcGIS tool which implements a multi-layer, feed-forward artificial neural network (ANN) to study the climate envelope of species. Supervised learning is achieved with the backpropagation algorithm. Based on the species distribution and the grids of climate (and edaphic) data for the reference and future periods, the tool predicts the future potential distribution of the studied species. The trained network can be saved and loaded. A modeling result based on the distribution of European larch (Larix decidua Mill.) is presented as a case study.
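A compact sketch of the same idea outside ArcGIS, using scikit-learn's multi-layer perceptron (trained by backpropagation) and joblib to save and load the trained network; the library choice, layer sizes, and file name are assumptions, not the tool's actual implementation:

```python
# Train a feed-forward ANN on reference-period climate predictors and
# presence/absence labels, persist it, and predict future potential distribution.
import numpy as np
from joblib import dump, load
from sklearn.neural_network import MLPClassifier

def train(ref_climate: np.ndarray, presence: np.ndarray,
          path: str = "larch_ann.joblib") -> MLPClassifier:
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    net.fit(ref_climate, presence)            # backpropagation-based training
    dump(net, path)                           # the trained network can be saved...
    return net

def predict_future(future_climate: np.ndarray,
                   path: str = "larch_ann.joblib") -> np.ndarray:
    net = load(path)                          # ...and loaded again later
    return net.predict_proba(future_climate)[:, 1]   # suitability per grid cell

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_ref = rng.normal(size=(500, 12))        # e.g. 12 climate/edaphic predictors
    y = (X_ref[:, 0] + X_ref[:, 1] > 0).astype(int)   # synthetic presence/absence
    train(X_ref, y)
    print(predict_future(X_ref + 0.2)[:5])
```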