76 results for Deterministic nanofabrication
Abstract:
This paper adds to the growing literature on market competition involving remanufacturers by analyzing a model in which the remanufacturer and the manufacturer collaborate within the same channel. It investigates a single-period deterministic model, which keeps the analysis simple so as to obtain sharper insights. The results characterize the optimal remanufacturing and pricing strategies for the remanufacturer and the manufacturer in the collaborative model.
Abstract:
Baited cameras are often used for abundance estimation wherever alternative techniques are precluded, e.g. in abyssal systems and areas such as reefs. This method has thus far used models of the arrival process that are deterministic and, therefore, permit no estimate of precision.
Furthermore, errors due to counting fish multiple times, or missing fish beyond the camera's view, have restricted the technique to using only the time of first arrival, leaving much of the data unused. Here, we reformulate the arrival process using a stochastic model, which allows the precision of abundance
estimates to be quantified. Assuming a non-gregarious, cross-current-scavenging fish, we show that prediction of abundance from first arrival time is extremely uncertain. Using example data, we show
that simple regression-based prediction from the initial (rising) slope of numbers at the bait gives good precision, accepting certain assumptions. The most precise abundance estimates were obtained
by including the declining phase of the time series, using a simple model of departures, and taking account of scavengers beyond the camera’s view, using a hidden Markov model.
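The rising-slope approach can be sketched as follows, under the hypothetical assumption that arrivals at the bait form a Poisson process whose rate is proportional to local abundance; the abundance and per-fish arrival rate below are invented illustrative values, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: true local abundance and per-fish arrival rate.
true_abundance = 50
arrival_rate_per_fish = 0.02   # arrivals per minute per fish (assumed)

# Simulate counts at the bait during the initial (rising) phase,
# treating arrivals as a Poisson process with rate N * lambda.
minutes = np.arange(1, 21)
expected = true_abundance * arrival_rate_per_fish * minutes
counts = rng.poisson(expected)

# Regression through the origin: the slope of counts vs. time estimates
# N * lambda, so abundance is recovered by dividing by the per-fish rate.
slope = np.sum(minutes * counts) / np.sum(minutes ** 2)
abundance_estimate = slope / arrival_rate_per_fish
print(f"estimated abundance: {abundance_estimate:.1f}")
```

Because the arrivals are simulated stochastically, repeating this with different seeds gives a spread of estimates, which is exactly the precision information the deterministic formulation cannot provide.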
Abstract:
This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. There is no study, as far as we know, on the statistical properties of the tests when the wrong model is used. We also consider the case of the simultaneous presence of the two types of models in a panel. We employ two asymptotics: joint asymptotics, with T, N -> infinity simultaneously, and T fixed with N allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for sample sizes usually used in practice. The results indicate that the assumption that T is fixed rather than asymptotic leads to tests with smaller size distortions, particularly for relatively small T with large N panels (micro-panels), than the tests derived under the joint asymptotics. We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both cases of T asymptotic and T fixed. The statistic for T asymptotic is slightly undersized when T is very small.
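As a rough illustration of the KPSS machinery the panel tests build on, the level-stationarity statistic can be sketched as below; note this minimal version uses a naive lag-0 long-run variance estimate rather than the HAC estimator of the actual test:

```python
import numpy as np

def kpss_level(y):
    """KPSS statistic for stationarity around a deterministic level.

    Minimal sketch: residuals from the sample mean, their partial sums,
    and a lag-0 long-run variance estimate (the real test uses a HAC
    estimator with a bandwidth choice).
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()          # residuals under the level model
    S = np.cumsum(e)          # partial-sum process
    s2 = np.sum(e ** 2) / T   # naive long-run variance estimate
    return np.sum(S ** 2) / (T ** 2 * s2)

rng = np.random.default_rng(1)
stationary = rng.normal(size=500)               # level-stationary series
random_walk = np.cumsum(rng.normal(size=500))   # unit-root series

# The statistic should be far larger for the random walk.
print(kpss_level(stationary), kpss_level(random_walk))
```

For the trend-stationary variant, the residuals would instead come from a regression on a constant and a linear trend, which is where the misspecification issue studied in the paper enters.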
Abstract:
This article applies the panel stationarity test with a break proposed by Hadri and Rao (2008) to examine whether 14 macroeconomic variables of OECD countries can be best represented as a random walk or as stationary fluctuations around a deterministic trend. In contrast to previous studies, based essentially on visual inspection of the break type or on simply applying the most general break model, we use a model selection procedure based on BIC. We do this for each time series, so that heterogeneous break models are allowed for in the panel. Our results suggest, overwhelmingly, that if we account for a structural break and cross-sectional dependence, and choose the break models to be congruent with the data, then the null of stationarity cannot be rejected for any of the 14 macroeconomic variables examined in this article. This is in sharp contrast with the results obtained by Hurlin (2004), using the same data but a different methodology.
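The BIC-based selection of a deterministic specification can be sketched in miniature, here choosing between a level-only and a level-plus-trend regression for a synthetic series (the data-generating values are invented for illustration):

```python
import numpy as np

def bic(y, X):
    """BIC for an OLS fit of y on regressors X, under a Gaussian likelihood."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.sum(resid ** 2) / n
    return n * np.log(sigma2) + k * np.log(n)

rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
y = 1.0 + 0.05 * t + rng.normal(size=200)   # trend-stationary series

level_X = np.column_stack([np.ones(200)])        # level-only model
trend_X = np.column_stack([np.ones(200), t])     # level + trend model
models = {"level": bic(y, level_X), "trend": bic(y, trend_X)}
print(min(models, key=models.get))
```

In the article's setting the candidate set would instead contain the various break models, with BIC picking a possibly different specification per series.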
Abstract:
Flutter prediction as currently practiced is almost always deterministic in nature, based on a single structural model that is assumed to represent a fleet of aircraft. However, it is also recognized that there can be significant structural variability, even for different flights of the same aircraft. The safety factor used for flutter clearance is in part meant to account for this variability. Simulation tools can, however, represent the consequences of structural variability in the flutter predictions, providing extra information that could be useful in planning physical tests and assessing risk. The main problem arising for this type of calculation when using high-fidelity tools based on computational fluid dynamics is the computational cost. The current paper uses an eigenvalue-based stability method together with Euler-level aerodynamics and different methods for propagating structural variability to stability predictions. The propagation methods are Monte Carlo, perturbation, and interval analysis. The feasibility of this type of analysis is demonstrated. Results are presented for the Goland wing and a generic fighter configuration.
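The Monte Carlo propagation route can be sketched on a toy one-degree-of-freedom system standing in for the Euler-level eigenvalue analysis; the system, its parameter distributions, and the aerodynamic term q are all assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def max_real_eig(k, c, q):
    """Largest eigenvalue real part of a toy 1-DOF aeroelastic system.

    State-space form of x'' + (c - q) x' + k x = 0, where q plays the
    role of an aerodynamic term that destabilises the mode (assumed).
    """
    A = np.array([[0.0, 1.0],
                  [-k, -(c - q)]])
    return np.max(np.linalg.eigvals(A).real)

# Monte Carlo propagation: sample uncertain structural parameters and
# record whether the mode is stable at a fixed flow condition q.
q = 0.3
samples = rng.normal([4.0, 0.4], [0.2, 0.05], size=(2000, 2))
stable = np.array([max_real_eig(k, c, q) < 0 for k, c in samples])
print(f"P(stable) ~ {stable.mean():.2f}")
```

The perturbation and interval methods mentioned above would replace the sampling loop with, respectively, a local sensitivity expansion and a bound computation over the parameter box, trading sampling cost for approximation.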
Abstract:
In this paper the use of eigenvalue stability analysis of very large dimension aeroelastic numerical models arising from the exploitation of computational fluid dynamics is reviewed. A formulation based on a block reduction of the system Jacobian proves powerful to allow various numerical algorithms to be exploited, including frequency domain solvers, reconstruction of a term describing the fluid–structure interaction from the sparse data which incurs the main computational cost, and sampling to place the expensive samples where they are most needed. The stability formulation also allows non-deterministic analysis to be carried out very efficiently through the use of an approximate Newton solver. Finally, the system eigenvectors are exploited to produce nonlinear and parameterised reduced order models for computing limit cycle responses. The performance of the methods is illustrated with results from a number of academic and large dimension aircraft test cases.
Abstract:
Flutter prediction as currently practiced is usually deterministic, with a single structural model used to represent an aircraft. By using interval analysis to take into account structural variability, recent work has demonstrated that small changes in the structure can lead to very large changes in the altitude at which flutter occurs (Marques, Badcock, et al., J. Aircraft, 2010). In this follow-up work we examine the same phenomenon using probabilistic collocation (PC), an uncertainty quantification technique which can efficiently propagate multivariate stochastic input through a simulation code, in this case an eigenvalue-based fluid-structure stability code. The resulting analysis predicts the consequences of an uncertain structure on the incidence of flutter in probabilistic terms, information that could be useful in planning flight tests and assessing the risk of structural failure. The uncertainty in flutter altitude is confirmed to be substantial. Assuming that the structural uncertainty represents an epistemic uncertainty regarding the structure, it may be reduced with the availability of additional information, for example aeroelastic response data from a flight test. Such data are used to update the structural uncertainty using Bayes' theorem. The consequent flutter uncertainty is significantly reduced across the entire Mach number range.
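The Bayesian updating step can be sketched as a grid-based application of Bayes' theorem to a single hypothetical structural parameter; the prior, the measurement model, and all numerical values below are invented for illustration:

```python
import numpy as np

# Hypothetical setup: uncertain modal stiffness k with a Gaussian prior;
# a "flight-test" measurement of the natural frequency sqrt(k) updates it.
k_grid = np.linspace(2.0, 6.0, 1001)
prior = np.exp(-0.5 * ((k_grid - 4.0) / 0.5) ** 2)
prior /= prior.sum()

measured_freq, noise = 2.05, 0.03   # assumed observation and its std dev
likelihood = np.exp(-0.5 * ((np.sqrt(k_grid) - measured_freq) / noise) ** 2)

posterior = prior * likelihood      # Bayes' theorem on the grid
posterior /= posterior.sum()

def std(p):
    mean = np.sum(k_grid * p)
    return np.sqrt(np.sum((k_grid - mean) ** 2 * p))

print(f"prior std: {std(prior):.3f}, posterior std: {std(posterior):.3f}")
```

Propagating the narrower posterior back through the stability analysis is what shrinks the flutter-altitude uncertainty in the study.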
Abstract:
The development of high-performance, low-complexity detection algorithms is a key challenge for real-time Multiple-Input Multiple-Output (MIMO) communication system design. The Fixed-Complexity Sphere Decoder (FSD) algorithm is one of the most promising approaches, enabling quasi-ML decoding accuracy and high-performance implementation due to its deterministic, highly parallel structure. However, it suffers from exponential growth in computational complexity as the number of MIMO transmit antennas increases, critically limiting its scalability to larger MIMO system topologies. In this paper, we present a solution to this problem by applying a novel cutting protocol to the decoding tree of a real-valued FSD algorithm. The resulting Real-valued Fixed-Complexity Sphere Decoder (RFSD) algorithm achieves quasi-ML decoding performance similar to FSD, but with an average 70% reduction in computational complexity, as we demonstrate from both theoretical and implementation perspectives for Quadrature Amplitude Modulation (QAM)-MIMO systems.
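The real-valued reformulation that such a decoder starts from can be sketched as the standard complex-to-real channel decomposition; the channel and symbol values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

def real_valued_decomposition(H):
    """Map a complex MIMO channel into its equivalent real-valued form.

    y = H x + n becomes [Re y; Im y] = [[Re H, -Im H], [Im H, Re H]]
    applied to [Re x; Im x]. This dimension doubling is the starting
    point for real-valued sphere-decoder variants such as RFSD.
    """
    return np.block([[H.real, -H.imag],
                     [H.imag,  H.real]])

# 2x2 complex channel and a QPSK-like transmit vector (illustrative).
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = np.array([1 + 1j, -1 - 1j])

Hr = real_valued_decomposition(H)
xr = np.concatenate([x.real, x.imag])
yr = Hr @ xr

y = H @ x
print(np.allclose(yr, np.concatenate([y.real, y.imag])))
```

The cutting protocol described in the paper then operates on the tree search over this doubled real-valued lattice rather than the complex one.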
Abstract:
Reliable prediction of long-term medical device performance using computer simulation requires consideration of variability in surgical procedure, as well as patient-specific factors. However, even deterministic simulation of long-term failure processes for such devices is time and resource consuming so that including variability can lead to excessive time to achieve useful predictions. This study investigates the use of an accelerated probabilistic framework for predicting the likely performance envelope of a device and applies it to femoral prosthesis loosening in cemented hip arthroplasty.
A creep and fatigue damage failure model for bone cement, in conjunction with an interfacial fatigue model for the implant–cement interface, was used to simulate loosening of a prosthesis within a cement mantle. A deterministic set of trial simulations was used to account for variability of a set of surgical and patient factors, and a response surface method was used to perform and accelerate a Monte Carlo simulation to achieve an estimate of the likely range of prosthesis loosening. The proposed framework was used to conceptually investigate the influence of prosthesis selection and surgical placement on prosthesis migration.
Results demonstrate that the response surface method is capable of dramatically reducing the time to achieve convergence in the mean and variance of the predicted response variables. A critical requirement for realistic predictions is the size and quality of the initial training dataset used to generate the response surface, and further work is required to determine recommendations for a minimum number of initial trials. Results of this conceptual application predicted that loosening was sensitive to implant size and femoral width. Furthermore, different rankings of implant performance were predicted when only individual simulations (e.g. an average condition) were used to rank implants, compared with when stochastic simulations were used. In conclusion, the proposed framework provides a viable approach to predicting realistic ranges of loosening behaviour for orthopaedic implants in reduced timeframes compared with conventional Monte Carlo simulations.
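The response-surface acceleration can be sketched in one dimension: a handful of "expensive" runs train a quadratic surrogate, which then absorbs the Monte Carlo sampling. The stand-in model and input distribution below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_sim(x):
    """Stand-in for a long-running loosening simulation (assumed form)."""
    return 1.0 + 2.0 * x + 0.5 * x ** 2

# Small deterministic set of trial simulations spanning the input range.
x_train = np.linspace(-2, 2, 9)
y_train = expensive_sim(x_train)

# Quadratic response surface fitted to the trials.
coeffs = np.polyfit(x_train, y_train, 2)
surrogate = np.poly1d(coeffs)

# Monte Carlo on the cheap surrogate instead of the expensive model.
x_mc = rng.normal(0.0, 0.5, size=100_000)
response = surrogate(x_mc)
print(f"mean ~ {response.mean():.3f}, std ~ {response.std():.3f}")
```

The caveat noted above applies directly: if the training points do not cover the input range well, the surrogate (and hence the predicted performance envelope) can be badly wrong, which is why the size and quality of the initial trial set matters.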
Abstract:
In this paper we study the influence of interventions on self-interactions in a spatial Prisoner's Dilemma on a two-dimensional grid with periodic boundary conditions and synchronous updating of the dynamics. We investigate two different types of self-interaction modifications. The first type (FSIP) is deterministic, affecting each self-interaction of a player by a constant factor, whereas the second type (PSIP) performs probabilistic interventions. Both types of interventions lead to a reduction of the payoff of the players and, hence, represent inhibiting effects. We find that a constant but moderate reduction of self-interactions has a very beneficial effect on the evolution of cooperators in the population, whereas probabilistic interventions on self-interactions are in general counterproductive for the coexistence of the two different strategies. (C) 2011 Elsevier Inc. All rights reserved.
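A minimal version of such a grid game, with a constant FSIP-style scaling factor on self-interactions, might look as follows; the payoff values, lattice size, and update-rule details are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy spatial Prisoner's Dilemma on a torus: 1 = cooperate, 0 = defect.
# Assumed payoffs: a cooperator earns 1 per cooperating neighbour, a
# defector earns b per cooperating neighbour; alpha scales the
# self-interaction term (the FSIP-style constant factor).
L, b, alpha = 30, 1.6, 0.6
grid = rng.integers(0, 2, size=(L, L))

def payoffs(g):
    neigh = sum(np.roll(g, s, axis=a)            # 4 neighbours, periodic
                for s in (-1, 1) for a in (0, 1))
    pay = np.where(g == 1, neigh.astype(float), b * neigh)
    return pay + alpha * (g == 1)                # scaled self-interaction

def step(g):
    p = payoffs(g)
    # Synchronous update: adopt the strategy of the best-scoring
    # neighbour (keeping one's own if it scores at least as well).
    best_p, best_s = p.copy(), g.copy()
    for s in (-1, 1):
        for a in (0, 1):
            pn, sn = np.roll(p, s, axis=a), np.roll(g, s, axis=a)
            better = pn > best_p
            best_p = np.where(better, pn, best_p)
            best_s = np.where(better, sn, best_s)
    return best_s

for _ in range(30):
    grid = step(grid)
print(f"cooperator fraction: {grid.mean():.2f}")
```

Sweeping alpha (and, for the PSIP case, replacing the constant factor with a random one per site and step) is the kind of experiment the paper's findings are based on.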
Abstract:
In this paper, we present a Bayesian approach to estimate a chromosome and a disorder network from the Online Mendelian Inheritance in Man (OMIM) database. In contrast to other approaches, we obtain statistical rather than deterministic networks, enabling parametric control of the uncertainty in the underlying disorder-disease gene associations contained in the OMIM, on which the networks are based. From a structural investigation of the chromosome network, we identify three chromosome subgroups that reflect architectural differences in chromosome-disorder associations that are predictively exploitable for a functional analysis of diseases.
Abstract:
Adaptive Multiple-Input Multiple-Output (MIMO) systems achieve a much higher information rate than conventional fixed schemes due to their ability to adapt their configurations according to the wireless communications environment. However, current adaptive MIMO detection schemes exhibit either low performance (and hence low spectral efficiency) or huge computational
complexity. In particular, whilst deterministic Sphere Decoder (SD) detection schemes are well established for static MIMO systems, exhibiting deterministic parallel structure, low computational complexity and quasi-ML detection performance, there are no corresponding adaptive schemes. This paper solves
this problem, describing a hybrid tree-based adaptive modulation detection scheme. Fixed Complexity Sphere Decoding (FSD) and Real-Valued FSD (RFSD) are modified and combined into a hybrid scheme exploited at low and medium SNR to provide the highest possible information rate with quasi-ML Bit Error
Rate (BER) performance, while Reduced Complexity RFSD, BChase and Decision Feedback (DFE) schemes are exploited in the high SNR regions. This algorithm provides the facility to balance the detection complexity with BER performance with compatible information rate in dynamic, adaptive MIMO communications
environments.
Abstract:
The influence of annular aperture parameters on the optical transmission through arrays of coaxial apertures in a metal film on high-refractive-index substrates has been investigated experimentally and numerically. It is shown that the transmission resonances are related to plasmonic crystal effects rather than to the frequency cutoff behavior associated with annular apertures. The role of deviations from the ideal aperture shape occurring during the fabrication process has also been studied. Annular aperture arrays are considered in many applications for achieving high optical transmission through metal films, and an understanding of nanofabrication tolerances is important. (C) 2010 American Institute of Physics.
Abstract:
mRNA chimeras from chromosomal translocations often play a role as transforming oncogenes. However, cancer transcriptomes also contain mRNA chimeras that may play a role in tumor development, which arise as transcriptional or post-transcriptional events. To identify such chimeras, we developed a deterministic screening strategy for long-range sequence analysis. High-throughput, long-read sequencing was then performed on cDNA libraries from major tumor histotypes and corresponding normal tissues. These analyses led to the identification of 378 chimeras, with an unexpectedly high frequency of expression (~2 x 10^-5 of all mRNA). Functional assays in breast and ovarian cancer cell lines showed that a large fraction of mRNA chimeras regulates cell replication. Strikingly, chimeras were shown to include both positive and negative regulators of cell growth, which functioned as such in a cell-type-specific manner. Replication-controlling chimeras were found to be expressed by most cancers from breast, ovary, colon, uterus, kidney, lung, and stomach, suggesting a widespread role in tumor development.
Abstract:
An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for the flows in each pathway. An important first step in this process is to provide reliable estimates of the slower-responding groundwater pathways and subsequently of the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach applying different hydrograph separation techniques, supplemented by lumped hydrological modelling, for calculating the Baseflow Index (BFI), with a view to developing an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km2). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 in predicted BFI for one catchment when the less reliable fixed-interval, sliding-interval and local-minimum turning point methods are included. This variation is reduced to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology.
These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
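As an illustration of one of the filtering-style separation techniques in this family, the Eckhardt recursive digital filter can be sketched as below; the recession constant, BFImax, and the synthetic hydrograph are illustrative values, not calibrated to the study catchments:

```python
import numpy as np

def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """Eckhardt recursive digital filter for baseflow separation.

    b[t] = ((1 - BFImax) * alpha * b[t-1] + (1 - alpha) * BFImax * q[t])
           / (1 - alpha * BFImax), constrained so that b[t] <= q[t].
    alpha is the recession constant and BFImax the maximum baseflow
    index; both values here are illustrative, not calibrated.
    """
    q = np.asarray(q, dtype=float)
    b = np.empty_like(q)
    b[0] = q[0] * bfi_max
    for t in range(1, len(q)):
        bt = ((1 - bfi_max) * alpha * b[t - 1]
              + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
        b[t] = min(bt, q[t])
    return b

# Synthetic hydrograph: a slow recession plus one storm peak.
t = np.arange(200)
q = 5.0 * np.exp(-t / 80.0) + 20.0 * np.exp(-0.5 * ((t - 50) / 4.0) ** 2)
b = eckhardt_baseflow(q)
bfi = b.sum() / q.sum()   # Baseflow Index over the record
print(f"BFI ~ {bfi:.2f}")
```

The subjectivity noted in the abstract enters through the choice of alpha and BFImax, which is why the recharge coefficient and Master Recession Curve Tabulation methods were preferred for the integrated approach.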