904 results for Single-process Models
Abstract:
The main objective of this study was to establish the basis for formulating an innovation model in an existing organization, based on cases. Innovation processes can be analyzed according to their needs and according to their emphasis on business model development or on R&D. The research was conducted in the energy sector within one company, using its projects as cases for the study. Development is typically slow in this field of business, although the case company has put emphasis on its innovation efforts. The analysis was done by identifying the cases' needs and comparing them. The results showed that, because of the variance in the needs of the cases, the applicability of innovation process models varies. It was discovered that by dividing the process into two phases, a uniform model could be composed. This model would fulfill the needs of the cases as well as of potential future projects.
Abstract:
The theoretical part of the thesis examined process redesign, process modeling, and the construction of process metrics. The goal of the work was to redesign the organization's certification process. To achieve this goal, both the current and the new process had to be modeled, and a set of metrics had to be built that would give the organization valuable information about how efficiently the new process works. The work was carried out as participatory action research. The author of the thesis had worked in the target organization as an employee for several years and was therefore able to draw on his own knowledge both in modeling the current process and in designing the new one. The result of the work is a new certification process that is leaner and more efficient than its predecessor. A new metrics system was built with which the organization's management can monitor the efficiency of the process stakeholders and the development of product quality. As a by-product, the organization received detailed process descriptions that can be used as training material when recruiting new personnel and as an informative tool when presenting the process to official certification bodies.
Abstract:
Passive solar building design is the process of designing a building while considering sunlight exposure, so that the building receives heat in winter and rejects heat in summer. The main goal of passive solar building design is to remove or reduce the need for mechanical and electrical cooling and heating systems, thereby saving energy costs and reducing environmental impact. This research uses evolutionary computation to design passive solar buildings. Evolutionary design has been used in many research projects to build 3D models of structures automatically. In this research, we use a mixture of split grammars and string rewriting to generate new 3D structures. To evaluate energy costs, the EnergyPlus system is used. This is a comprehensive building energy simulation system, which is used alongside the genetic programming system. In addition, genetic programming also considers other design and geometry characteristics of the building as search objectives, for example window placement, building shape, size, and complexity. In passive solar design, reducing the energy needed for cooling and for heating are the two objectives of interest. Experiments show that smaller buildings with no windows or skylights are the most energy-efficient models. Window heat gain is therefore used as another objective, to encourage models to have windows. In addition, window- and volume-based objectives are tried. To examine the impact of the environment on designs, experiments are run in five different geographic locations. Both single-floor and multi-floor models are examined in this research. Across the experiments, solutions were consistent with respect to materials, sizes, and appearance, and satisfied the problem constraints in all instances.
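The evolutionary loop the abstract describes can be sketched in miniature. The study itself evolves full 3D structures via split grammars and scores them with EnergyPlus simulations; the toy below only shows the genetic-algorithm skeleton, evolving a box building (width, depth, height, window fraction) against an invented energy objective. Every parameter and the fitness function are illustrative assumptions, not the paper's.

```python
import random

rng = random.Random(7)

def random_building():
    """A candidate: [width, depth, height] in metres plus a window fraction."""
    return [rng.uniform(4, 20), rng.uniform(4, 20), rng.uniform(2.5, 9), rng.uniform(0.0, 0.9)]

def energy_cost(b):
    """Toy objective: envelope heat loss minus window solar gain, per unit floor area."""
    w, d, h, win = b
    envelope = 2 * (w + d) * h + w * d        # wall + roof area
    heat_loss = envelope * (1.0 + 2.0 * win)  # glazing loses more heat than wall...
    solar_gain = 2.5 * w * h * win            # ...but admits winter sun on one facade
    return (heat_loss - solar_gain) / (w * d)

def evolve(pop=None, pop_size=30, generations=60):
    """Truncation-selection GA with elitism and Gaussian mutation."""
    if pop is None:
        pop = [random_building() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy_cost)
        survivors = pop[: len(pop) // 2]      # the best half always survives (elitism)
        children = []
        for parent in survivors:
            child = [g * (1.0 + rng.gauss(0.0, 0.1)) for g in parent]
            child[3] = min(max(child[3], 0.0), 0.9)   # keep window fraction valid
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy_cost)
```

With elitism the best cost can only improve across generations; in the real system each fitness evaluation is an EnergyPlus run, which is why the paper's search is far more expensive than this sketch.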
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
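The key transformation, conditioning a subset of the data on the remaining observations to obtain a two-sided autoregression, can be illustrated numerically. For a stationary Gaussian AR(1), the conditional mean of y_t given its neighbours is [phi/(1+phi^2)](y_{t-1}+y_{t+1}), so ordinary least squares on the conditioned subsample estimates a transformed autoregressive parameter. The sketch below is illustrative only; the parameter values and sample size are assumptions.

```python
import random

def ar1(T, phi, seed=0):
    """Simulate a stationary AR(1): y_t = phi*y_{t-1} + e_t, e_t ~ N(0,1)."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(T):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        y.append(prev)
    return y

def two_sided_ols(y):
    """Regress y_t on y_{t-1} + y_{t+1} over every other t (no intercept; data are zero-mean).
    The slope consistently estimates phi / (1 + phi^2)."""
    x, z = [], []
    for t in range(1, len(y) - 1, 2):
        x.append(y[t - 1] + y[t + 1])
        z.append(y[t])
    return sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
```

For phi = 0.5 the two-sided slope is 0.5/1.25 = 0.4; inverting this mapping recovers an estimate of phi, which is the sense in which classical linear regression inference can be applied to the transformed model.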
Abstract:
In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical application of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T^2) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
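The dynamic-programming idea is simple to sketch for the pure structural-change case with a mean-shift model: precompute the segment SSRs (the O(T^2) least-squares operations mentioned above), then combine them recursively. This is a minimal illustration of the principle, not the authors' GAUSS implementation; the minimal segment length h and the mean-only regression are simplifying assumptions.

```python
def segment_ssr(y, i, j):
    """Sum of squared residuals from fitting a constant mean on y[i:j] (half-open)."""
    seg = y[i:j]
    mean = sum(seg) / len(seg)
    return sum((v - mean) ** 2 for v in seg)

def break_dates(y, m, h=2):
    """Global minimizer of the SSR over all placements of m breaks.
    Returns (minimal SSR, list of break dates); h is the minimal segment length."""
    T = len(y)
    # The O(T^2) pass: the SSR of every admissible segment is computed once.
    ssr = {(i, j): segment_ssr(y, i, j)
           for i in range(T) for j in range(i + h, T + 1)}
    # cost[k][j]: minimal SSR of splitting y[0:j] into k segments; back for recovery.
    cost = [dict() for _ in range(m + 2)]
    back = [dict() for _ in range(m + 2)]
    for j in range(h, T + 1):
        cost[1][j] = ssr[(0, j)]
    for k in range(2, m + 2):
        for j in range(k * h, T + 1):
            best = None
            for i in range((k - 1) * h, j - h + 1):
                c = cost[k - 1][i] + ssr[(i, j)]
                if best is None or c < best:
                    best, back[k][j] = c, i
            cost[k][j] = best
    dates, j = [], T
    for k in range(m + 1, 1, -1):   # backtrack through the k-segment solutions
        j = back[k][j]
        dates.append(j)
    return cost[m + 1][T], sorted(dates)
```

On a series with one clear mean shift, e.g. `break_dates([0.0]*10 + [5.0]*10, 1)`, the recursion recovers the break at observation 10 with zero SSR; Bai and Perron's algorithm uses the same recursion with general regression segments.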
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which it converges. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate in even quite large sample sizes.
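The note's setup can be reproduced in a few lines: simulate an ARMA(1,1) sample, form the scaled partial-sum process X_T(r) = S_[Tr] / (sigma_lr * sqrt(T)) using the long-run standard deviation sigma_lr = (1+theta)/(1-phi) for unit innovation variance, and compare its path to a standard Wiener process. The parameter values below are illustrative assumptions, not the note's.

```python
import math
import random

def arma11(T, phi=0.5, theta=0.3, seed=0):
    """Simulate T observations of y_t = phi*y_{t-1} + e_t + theta*e_{t-1}, e_t ~ N(0,1)."""
    rng = random.Random(seed)
    y, y_prev, e_prev = [], 0.0, 0.0
    for _ in range(T):
        e = rng.gauss(0.0, 1.0)
        y_t = phi * y_prev + e + theta * e_prev
        y.append(y_t)
        y_prev, e_prev = y_t, e
    return y

def scaled_partial_sums(y, phi=0.5, theta=0.3):
    """X_T(t/T) = S_t / (sigma_lr * sqrt(T)), which the FCLT says converges to W(t/T).
    sigma_lr = (1+theta)/(1-phi) is the long-run std dev of ARMA(1,1) with unit innovations."""
    T = len(y)
    sigma_lr = (1.0 + theta) / (1.0 - phi)
    out, s = [], 0.0
    for v in y:
        s += v
        out.append(s / (sigma_lr * math.sqrt(T)))
    return out
```

Plotting `scaled_partial_sums(arma11(T))` for several T against simulated Wiener paths is exactly the kind of finite-sample comparison the note carries out; the dependence parameters phi and theta govern how slowly the approximation becomes adequate.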
Abstract:
We examine the relationship between the risk premium on the S&P 500 index return and its conditional variance. We use the SMEGARCH (Semiparametric-Mean EGARCH) model, in which the conditional variance process is EGARCH while the conditional mean is an arbitrary function of the conditional variance. For monthly S&P 500 excess returns, the relationship between the two moments that we uncover is nonlinear and nonmonotonic. We also find considerable persistence in the conditional variance, as well as a leverage effect, as documented by others. Moreover, the shape of these relationships seems to be relatively stable over time.
Abstract:
We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing nonetheless remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial-condition uncertainty, and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating the transitions between active and suppressed periods of tropical convection.
Abstract:
Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale observations. Errors in estimating the observations result in uncertainty in the modeled simulations. One method to address this uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and that there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model cloud properties are relatively insensitive to the forcing. The ensemble is further used to investigate relationships between cloud variables and precipitation, and it identifies differences between the CRMs and SCMs, particularly for relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations, enabling a more complete model investigation than the more traditional single best-estimate simulation alone.
Abstract:
A dynamical wind-wave climate simulation covering the North Atlantic Ocean and spanning the whole 21st century under the A1B scenario has been compared with a set of statistical projections using atmospheric variables or large-scale climate indices as predictors. As a first step, the performance of all statistical models has been evaluated for the present-day climate; namely, they have been compared with a dynamical wind-wave hindcast in terms of winter significant wave height (SWH) trends and variance, as well as with altimetry data. For the projections, it has been found that statistical models that use wind speed as predictor are able to capture a larger fraction of the winter SWH inter-annual variability (68% on average) and of the long-term changes projected by the dynamical simulation. Conversely, regression models using climate indices, sea level pressure and/or pressure gradient as predictors account for a smaller fraction of the SWH variance (from 2.8% to 33%) and do not reproduce the dynamically projected long-term trends over the North Atlantic. Investigating the wind-sea and swell components separately, we have found that combining two regression models, one for wind-sea waves and another for the swell component, can significantly improve the wave-field projections obtained from single regression models over the North Atlantic.
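The model comparison at the heart of the study — statistical projections scored by the fraction of winter SWH variance they explain — can be sketched with two single-predictor regressions on synthetic data, one driven by wind speed and one by a climate-index-like series. Everything below (the data, coefficients, and sample size) is an illustrative assumption, not the study's.

```python
import random

def r_squared(x, y):
    """Fraction of variance of y explained by a simple least-squares regression on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

rng = random.Random(42)
wind = [rng.uniform(5.0, 25.0) for _ in range(200)]    # winter-mean wind speed (synthetic)
index = [rng.gauss(0.0, 1.0) for _ in range(200)]      # a NAO-like climate index (synthetic)
# Synthetic SWH driven mostly by wind and only weakly by the index:
swh = [0.2 * w + 0.1 * i + rng.gauss(0.0, 0.5) for w, i in zip(wind, index)]

r2_wind, r2_index = r_squared(wind, swh), r_squared(index, swh)
```

Under this construction the wind-speed regression explains most of the SWH variance while the index regression explains very little, mirroring the contrast the study reports (68% on average versus 2.8% to 33%).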
Abstract:
The degree to which habitat fragmentation affects bird incidence is species specific and may depend on varying spatial scales. Selecting the correct scale of measurement is essential to appropriately assess the effects of habitat fragmentation on bird occurrence. Our objective was to determine which spatial scale of landscape measurement best describes the incidence of three bird species (Pyriglena leucoptera, Xiphorhynchus fuscus and Chiroxiphia caudata) in the fragmented Brazilian Atlantic forest and to test whether multi-scale models perform better than single-scale ones. Bird incidence was assessed in 80 forest fragments. The surrounding landscape structure was described with four indices measured at four spatial scales (400-, 600-, 800- and 1,000-m buffers around the sample points). The explanatory power of each scale in predicting bird incidence was assessed using logistic regression, bootstrapped with 1,000 repetitions. The best scale varied between species (1,000-m radius for P. leucoptera, 800-m for X. fuscus and 600-m for C. caudata), probably owing to their distinct feeding habits and foraging strategies. Multi-scale models always resulted in better predictions than single-scale models, suggesting that different aspects of the landscape structure are related to different ecological processes influencing bird incidence. In particular, our results suggest that local extinction and (re)colonisation processes might simultaneously act at different scales. Thus, single-scale models may not be good enough to properly describe complex pattern-process relationships. Selecting variables at multiple ecologically relevant scales is a reasonable procedure to optimise the accuracy of species incidence models.
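The scale-selection step can be sketched as fitting a logistic regression of incidence on a landscape index at each buffer scale and keeping the scale that fits best; the study additionally bootstraps this with 1,000 repetitions. The gradient-ascent fit, the synthetic data, and the likelihood criterion below are illustrative assumptions, not the authors' exact procedure.

```python
import math
import random

def fit_logistic(x, y, lr=0.5, epochs=2000):
    """One-predictor logistic regression fitted by batch gradient ascent.
    Returns the maximised log-likelihood."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    ll = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return ll

def best_scale(covers_by_scale, incidence):
    """Pick the buffer scale whose landscape index best predicts incidence."""
    return max(covers_by_scale,
               key=lambda s: fit_logistic(covers_by_scale[s], incidence))
```

On synthetic data where incidence is driven by the index measured at one particular scale, the likelihood comparison recovers that scale; a multi-scale model would instead combine the indices from several scales in a single regression.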
Abstract:
Internet Protocol TV (IPTV) is predicted to be a key technology winner in the future. Efforts to accelerate its deployment rely on a centralized IPTV model that combines the VHO, encoders, a controller, the access network, and the home network. Regardless of whether the network is delivering live TV, VOD, or time-shift TV, all content and network traffic resulting from subscriber requests must traverse the entire network, from the super-headend all the way to each subscriber's Set-Top Box (STB). IPTV services require very stringent QoS guarantees. When IPTV traffic shares network resources with other traffic, such as data and voice, ensuring QoS while efficiently utilizing the network resources is a key and challenging issue; QoS is measured in the network-centric terms of delay jitter, packet loss, and bounds on delay. The main focus of this thesis is optimized bandwidth allocation and smooth data transmission: a traffic model is proposed for smooth delivery of video services over an IPTV network, together with an evaluation of its QoS performance. Following Maglaris et al. [5], the coding bit rate of a single video source is analyzed first. Various statistical quantities are derived from bit-rate data collected with a conditional replenishment interframe coding scheme. Two correlated Markov process models (one in discrete time and one in continuous time) are shown to fit the experimental data and are used to model the input rates of several independent sources feeding a statistical multiplexer. A preventive control mechanism, including connection admission control (CAC) and traffic policing, is used for traffic control. The QoS of a common bandwidth scheduler (FIFO) is evaluated using fluid models with a Markovian queuing method, and the results are analyzed both with a simulator and analytically, measuring packet loss, overflow, and mean waiting time among the network users.
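The modelling idea taken from Maglaris et al. — Markov-modulated video sources feeding a statistical multiplexer — can be sketched with a discrete-time simulation: each source is a two-state (on/off) Markov chain, the aggregate rate feeds a finite FIFO buffer served at a fixed rate, and overflow gives the packet-loss measure. All parameter values below are illustrative assumptions, not the thesis's.

```python
import random

def simulate_multiplexer(n_sources=10, steps=5000, p_on=0.1, p_off=0.2,
                         peak_rate=1.0, service_rate=4.0, buffer_size=20.0, seed=1):
    """Fraction of offered traffic lost to buffer overflow at a FIFO multiplexer."""
    rng = random.Random(seed)
    on = [False] * n_sources
    queue = offered = lost = 0.0
    for _ in range(steps):
        # Each source flips state according to its two-state Markov chain.
        for i in range(n_sources):
            if on[i]:
                if rng.random() < p_off:
                    on[i] = False
            elif rng.random() < p_on:
                on[i] = True
        arrivals = peak_rate * sum(on)
        offered += arrivals
        queue += arrivals
        if queue > buffer_size:                  # overflow: excess traffic is dropped
            lost += queue - buffer_size
            queue = buffer_size
        queue = max(0.0, queue - service_rate)   # FIFO service at a fixed rate
    return lost / offered if offered else 0.0
```

With these defaults, the stationary fraction of sources on is p_on/(p_on+p_off) = 1/3, so the mean offered load (about 3.3 units per step) is below the service rate of 4; losses then come only from bursts, and enlarging the buffer drives the loss fraction toward zero, which is the trade-off the fluid-model analysis quantifies.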
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)