922 results for Heckman-type selection models
Abstract:
This paper formulates several mathematical models for simultaneously determining the optimal sequence of component placements and the assignment of component types to feeders, i.e. the integrated scheduling problem, for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them on the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first. Then, each of them is equivalently converted into an integer linear form. The models for the integrated problem are verified by two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total traveling distance.
Abstract:
Most prior new product diffusion (NPD) models do not specifically consider the role of the business model in the process. However, the context of NPD in today's market has been changed dramatically by the introduction of new business models. Through reinterpretation and extension, this paper empirically examines the feasibility of applying Bass-type NPD models to products that are commercialized by different business models. More specifically, the results and analysis of this study consider the subscription business model for service products, the freemium business model for digital products, and a pre-paid and post-paid business model that is widely used by mobile network providers. The paper offers new insights derived from implementing the models in real-life cases. It also highlights three themes for future research.
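The Bass-type dynamics examined above have a closed-form cumulative adoption fraction, F(t) = (1 − e^−(p+q)t) / (1 + (q/p)·e^−(p+q)t). The sketch below evaluates it with illustrative coefficients (p for innovation, q for imitation); these are placeholder values, not estimates from the paper.

```python
import math

def bass_cdf(t, p, q):
    """Cumulative adoption fraction F(t) in the Bass diffusion model."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative coefficients (placeholders, not values from the paper):
# p = coefficient of innovation, q = coefficient of imitation
p, q = 0.03, 0.38
adoption = [bass_cdf(t, p, q) for t in range(21)]   # periods 0..20
```

With q much larger than p, adoption follows the familiar S-curve: slow early uptake, a steep imitation-driven middle, and saturation near full adoption.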
Abstract:
Resource Selection (or Query Routing) is an important step in P2P IR. Though analogous to document retrieval in the sense of choosing a relevant subset of resources, resource selection methods have evolved independently from those for document retrieval. Among the reasons for such divergence is that document retrieval targets scenarios where the underlying resources are semantically homogeneous, whereas peers may manage diverse content. We observe that semantic heterogeneity is mitigated at the resource selection layer of the clustered 2-tier P2P IR architecture through the use of clustering, and posit that this warrants revisiting the applicability of document retrieval methods for resource selection within such a framework. This paper empirically benchmarks document retrieval models against state-of-the-art resource selection models for the problem of resource selection in the clustered P2P IR architecture, using classical IR evaluation metrics. Our benchmarking study illustrates that document retrieval models significantly outperform other methods for this task. This indicates that the clustered P2P IR framework can exploit advancements in document retrieval methods to deliver corresponding improvements in resource selection, suggesting a potential convergence of these fields for the clustered P2P IR architecture.
Abstract:
Estimates of HIV prevalence are important for policy in order to establish the health status of a country's population and to evaluate the effectiveness of population-based interventions and campaigns. However, participation rates in testing for surveillance conducted as part of household surveys, on which many of these estimates are based, can be low. HIV positive individuals may be less likely to participate because they fear disclosure, in which case estimates obtained using conventional approaches to deal with missing data, such as imputation-based methods, will be biased. We develop a Heckman-type simultaneous equation approach which accounts for non-ignorable selection, but unlike previous implementations, allows for spatial dependence and does not impose a homogeneous selection process on all respondents. In addition, our framework addresses the issue of separation, where for instance some factors are severely unbalanced and highly predictive of the response, which would ordinarily prevent model convergence. Estimation is carried out within a penalized likelihood framework where smoothing is achieved using a parametrization of the smoothing criterion which makes estimation more stable and efficient. We provide the software for straightforward implementation of the proposed approach, and apply our methodology to estimating national and sub-national HIV prevalence in Swaziland, Zimbabwe and Zambia.
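The Heckman-type correction extended in this study can be illustrated with the classical two-step estimator on synthetic data. Everything below is a minimal sketch: the variable names, data-generating values, and two-step approach are illustrative stand-ins, not the paper's penalized-likelihood, spatially dependent method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data (illustrative, not survey data): the outcome is observed
# only for selected units, and the selection shock v is correlated with
# the outcome shock u (rho = 0.6), so naive estimates would be biased.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                     # instrument (affects selection only)
x = rng.normal(size=n)                     # outcome covariate
u, v = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
s = (0.5 + z + x + v) > 0                  # selection indicator
y = 1.0 + 2.0 * x + u                      # outcome, true slope = 2.0

# Step 1: probit of selection on (1, z, x) by maximum likelihood
def nll(b):
    p = norm.cdf(b[0] + b[1] * z + b[2] * x).clip(1e-10, 1 - 1e-10)
    return -(s * np.log(p) + (~s) * np.log(1 - p)).sum()
g = minimize(nll, np.zeros(3)).x

# Step 2: OLS on the selected sample, augmented with the inverse Mills
# ratio, which absorbs the selection-induced correlation
idx = g[0] + g[1] * z + g[2] * x
imr = norm.pdf(idx) / norm.cdf(idx)
X = np.column_stack([np.ones(s.sum()), x[s], imr[s]])
beta = np.linalg.lstsq(X, y[s], rcond=None)[0]   # [intercept, slope, imr coef]
```

The slope recovered in `beta[1]` is close to the true value of 2.0 despite non-random selection; a positive coefficient on the inverse Mills ratio reflects the positive correlation between the selection and outcome errors.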
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debates. These difficulties present challenges with the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA) which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which have been established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We will pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models will be employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then perform cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
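The DFA underlying the MF-DFA machinery used throughout the thesis can be sketched for the monofractal case q = 2: integrate the mean-removed series, detrend it over windows of varying size s, and read the scaling exponent alpha off the slope of log F(s) versus log s. This is a minimal order-1 (linear detrending) version, not the full multifractal estimator; all values below are illustrative.

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis, order-1 detrending (the q = 2 case)."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # mean squared residual around a local linear trend in each segment
        f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segs]
        flucts.append(np.sqrt(np.mean(f2)))
    # scaling exponent alpha: slope of log F(s) against log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(42)
noise = rng.normal(size=8192)
scales = [16, 32, 64, 128, 256]
alpha_noise = dfa_alpha(noise, scales)            # ~0.5: no memory
alpha_walk = dfa_alpha(np.cumsum(noise), scales)  # ~1.5: strong persistence
```

Exponents near 0.5 indicate an uncorrelated series, values above 0.5 indicate long-range persistence, and integrated (random-walk-like) series sit near 1.5, which is how the thesis distinguishes short-memory stock prices from long-memory exchange rates and electricity prices.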
Abstract:
For design-build (DB) projects, owners normally use lump sum and Guaranteed Maximum Price (GMP) as the major contract payment provisions. However, there has been a lack of empirical studies comparing project performance across different contract types and investigating how different project characteristics affect the owners' selection of contract arrangement. Project information from the Design-Build Institute of America (DBIA) database was collected to reveal the statistical relationship between project characteristics and contract types and to compare project performance between lump sum and GMP contracts. The results show that lump sum is still the most frequently used contract method for DB projects, especially in the public sector. However, projects using GMP contracts are more likely to have less schedule delay and cost overrun than those with lump sum contracts. The chi-square tests of cross tabulations reveal that project type, owner type, and procurement method significantly affect the selection of contract types. Civil infrastructure projects tend to use lump sum more frequently than industrial engineering projects, and the qualification-oriented contractor selection process resorts to GMP more often than the cost-oriented process. The findings of this research contribute to the current body of knowledge concerning the effect of associated project characteristics on contract type selection. Overall, the results of this study provide empirical evidence from real DB projects that can be used by owners to select appropriate contract types and eventually improve future project performance.
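A chi-square test of a cross tabulation like those used in this study can be reproduced in a few lines with scipy. The counts below are a made-up owner-type-by-contract-type table for illustration only, not the DBIA data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation (NOT the DBIA data):
# rows = owner type (public, private); cols = contract type (lump sum, GMP)
table = np.array([[40, 10],
                  [25, 25]])

# Tests independence of the row and column factors; a small p-value
# suggests owner type and contract choice are associated
chi2, p, dof, expected = chi2_contingency(table)
```

For a 2x2 table the test has one degree of freedom, and `expected` holds the counts implied by independence, which is what the observed table is compared against.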
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous. Therefore logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum over prediction errors rather than over the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small, or if the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
Abstract:
A state-based micropolar peridynamic theory for linear elastic solids is proposed. The main motivation is to introduce additional micro-rotational degrees of freedom at each material point and thus naturally bring the physically relevant material length scale parameters into peridynamics. Non-ordinary type modeling via constitutive correspondence is adopted here to define the micropolar peridynamic material. Along with a general three-dimensional model, homogenized one-dimensional Timoshenko-type beam models for both the proposed micropolar and the standard non-polar peridynamic variants are derived. The efficacy of the proposed models in analyzing continua with length scale effects is established via numerical simulations of a few beam and plane-stress problems. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
This article considers a semi-infinite mathematical programming problem with equilibrium constraints (SIMPEC), defined as a semi-infinite mathematical programming problem with complementarity constraints. We establish necessary and sufficient optimality conditions for (SIMPEC). We also formulate Wolfe- and Mond-Weir-type dual models for (SIMPEC) and establish weak, strong and strict converse duality theorems for (SIMPEC) and the corresponding dual problems under invexity assumptions.
Abstract:
Previous research has shown a strong positive correlation between short-term persistence and long-term output growth, as well as between depreciation rates and long-term output growth. This evidence, therefore, contradicts the standard predictions of traditional neoclassical or AK-type growth models with exogenous depreciation. In this paper, we first confirm these findings for a larger sample of 101 countries. We then study the dynamics of growth and persistence in a model where both the depreciation rate and growth are endogenous and procyclical. We find that the model's predictions become consistent with the empirical evidence on persistence, long-term growth and depreciation rates.
Abstract:
Two scramjet models of different types, with side-wall compression and top-wall compression inlets, have been tested in the HPTF (Hypersonic Propulsion Test Facility) under experimental conditions of Mach number 5.8, total temperature 1700 K, total pressure 4.5 MPa and mass flow rate 3.5 kg/s. Liquid kerosene was used as the main fuel for the scramjets. In order to achieve fast ignition in the combustor, a small amount of hydrogen was used as a pilot. A strut with an alternative tail was employed to increase the compression ratio and to enhance mixing in the side-wall compression case. Recessed cavities were used as flameholders for combustion stability. The combustion efficiency was estimated by one-dimensional theory. The uniformity of the facility nozzle flow was verified with a scanning pitot rake. The experimental results showed that the kerosene fuel was successfully ignited and stable combustion was achieved for both scramjet models. However, the thrusts were still less than the model drags due to the low combustion efficiencies.
Abstract:
For damaging response, the force-displacement relationship of a structure is highly nonlinear and history-dependent. For satisfactory analysis of such behavior, it is important to be able to characterize and to model the phenomenon of hysteresis accurately. A number of models have been proposed for response studies of hysteretic structures, some of which are examined in detail in this thesis. There are two popular classes of models used in the analysis of curvilinear hysteretic systems. The first is of the distributed element or assemblage type, which models the physical behavior of the system by using well-known building blocks. The second class of models is of the differential equation type, which is based on the introduction of an extra variable to describe the history dependence of the system.
Owing to their mathematical simplicity, the latter models have been used extensively for various applications in structural dynamics, most notably in the estimation of the response statistics of hysteretic systems subjected to stochastic excitation. But the fundamental characteristics of these models are still not clearly understood. A response analysis of systems using both the Distributed Element model and the differential equation model when subjected to a variety of quasi-static and dynamic loading conditions leads to the following conclusion: Caution must be exercised when employing the models belonging to the second class in structural response studies as they can produce misleading results.
Masing's hypothesis, originally proposed for steady-state loading, can be extended to general transient loading as well, leading to considerable simplification in the analysis of the Distributed Element models. A simple, nonparametric identification technique is also outlined, by means of which an optimal model representation involving one additional state variable is determined for hysteretic systems.
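A canonical member of the differential-equation class of hysteresis models discussed above is the Bouc-Wen model, in which an auxiliary state z carries the loading history. The sketch below integrates it with a simple Euler scheme under sinusoidal displacement; the model choice and all parameter values are illustrative, not the thesis's specific formulation.

```python
import math

def bouc_wen(xs, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Euler integration of the Bouc-Wen auxiliary variable z along
    a displacement history xs; (x, z) traces a hysteresis loop."""
    z, zs = 0.0, []
    for i in range(1, len(xs)):
        dx = xs[i] - xs[i - 1]
        # dz = A dx - beta |dx| |z|^(n-1) z - gamma dx |z|^n
        dz = (A * dx
              - beta * abs(dx) * abs(z) ** (n - 1) * z
              - gamma * dx * abs(z) ** n)
        z += dz
        zs.append(z)
    return zs

# Two cycles of sinusoidal displacement of unit amplitude
xs = [math.sin(2 * math.pi * t / 1000) for t in range(2001)]
zs = bouc_wen(xs)
```

For these parameters the restoring variable saturates below the ultimate value A/(beta + gamma) = 1, and the (x, z) curve on loading differs from the curve on unloading, which is precisely the history dependence the extra state variable is introduced to capture.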
Abstract:
This paper presents an assessment of the performance of an embedded propulsion system in the presence of distortion associated with boundary layer ingestion. For fan pressure ratios of interest for civil transports, the benefits of boundary layer ingestion are shown to be very sensitive to the magnitude of fan and duct losses. The distortion transfer across the fan, basically the comparison of the stagnation pressure non-uniformity downstream of the fan to that upstream of the fan, has a major role in determining the impact of boundary layer ingestion on overall fuel burn. This, in turn, puts requirements on the fidelity with which one needs to assess the distortion transfer, and thus the type of models that need to be used in such assessment. For the three-dimensional distortions associated with fuselage boundary layers ingested into a subsonic diffusing inlet, it is found that boundary layer ingestion can provide decreases in fuel burn of several per cent. It is also shown that a promising avenue for mitigating the risks (aerodynamic as well as aeromechanical) in boundary layer ingestion is to mix out the flow before it reaches the engine face.
Abstract:
Parallel strand models for the base sequences d(A)10·d(T)10, d(AT)5·d(TA)5, d(G5C5)·d(C5G5), d(GC)5·d(CG)5 and d(CTATAGGGAT)·d(GATATCCCTA), in which reverse Watson-Crick A-T pairing with two H-bonds and reverse Watson-Crick G-C pairing with one H-bond or with two H-bonds were adopted, and three models of the d(T)14·d(A)14·d(T)14 triple helix with different strand orientations, were built up by molecular architecture and energy minimization. Comparisons of the parallel duplex models with their corresponding B-DNA models and comparisons among the three triple helices showed: (i) the conformational energies of the parallel A-T duplex models were a little lower, while those of the G-C duplex models were about 8% higher than those of their corresponding B-DNA models; (ii) the energy differences between parallel and B-type duplex models and among the three triple helices arose mainly from base stacking energies, especially for G-C base pairing; (iii) the parallel duplexes with one-H-bond G-C pairs were less stable than those with two-H-bond G-C pairs. The paper includes a brief discussion of the effect of base stacking and base sequences on DNA conformations. (C) 1997 Academic Press Limited.
Abstract:
A phenol-degrading microorganism, Alcaligenes faecalis, was used to study the substrate interactions during cell growth on phenol and m-cresol as dual substrates. Both phenol and m-cresol could be utilized by the bacterium as the sole carbon and energy source. When cells grew on the mixture of phenol and m-cresol, strong substrate interactions were observed: m-cresol inhibited the degradation of phenol, phenol in turn inhibited the utilization of m-cresol, and the overall cell growth rate reflected the combined action of the two substrates. In addition, the cell growth and substrate degradation kinetics of phenol and m-cresol as single and mixed substrates for A. faecalis in batch cultures were investigated over a wide range of initial phenol concentrations (10-1400 mg L-1) and initial m-cresol concentrations (5-200 mg L-1). The single-substrate kinetics was described well by Haldane-type kinetic models, with model constants μm1 = 0.15 h-1, KS1 = 2.22 mg L-1 and Ki1 = 245.37 mg L-1 for cell growth on phenol, and μm2 = 0.0782 h-1, KS2 = 1.30 mg L-1, Ki2 = 71.77 mg L-1 and Ki2' = 5480 (mg L-1)^2 for cell growth on m-cresol. The proposed cell growth kinetic model was used to characterize the substrate interactions in the dual-substrate system; the parameters representing the interactions between phenol and m-cresol were K = 1.8 x 10^-6, M = 5.5 x 10^-5 and Q = 6.7 x 10^-4. The experimental results demonstrated that these models adequately describe the dynamic behavior of phenol and m-cresol degradation, as single and mixed substrates, by the A. faecalis strain.
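The basic Haldane-type substrate-inhibition kinetics fitted in this study have the form μ(S) = μm·S / (KS + S + S²/Ki). The sketch below evaluates it with the phenol constants reported above; note the m-cresol model in the study carries an extra Ki′ term, which is omitted here.

```python
def haldane_mu(s, mu_m, k_s, k_i):
    """Haldane substrate-inhibition specific growth rate (h^-1)
    at substrate concentration s (mg/L)."""
    return mu_m * s / (k_s + s + s ** 2 / k_i)

# Phenol constants reported in the abstract
MU_M, K_S, K_I = 0.15, 2.22, 245.37      # h^-1, mg/L, mg/L

# Growth rate peaks at S* = sqrt(K_S * K_I); beyond it inhibition dominates
s_star = (K_S * K_I) ** 0.5              # about 23 mg/L
```

Unlike Monod kinetics, the quadratic S²/Ki term makes μ(S) rise to a maximum at S* and then fall again, which is why high initial phenol concentrations (toward the 1400 mg/L end of the tested range) slow growth rather than accelerate it.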