931 results for optimization of production processes
Abstract:
A quadratic programming optimization procedure for designing asymmetric apodization windows tailored to the shape of time-domain sample waveforms recorded using a terahertz transient spectrometer is proposed. By artificially degrading the waveforms, the performance of the designed window in both the time and the frequency domains is compared with that of conventional rectangular, triangular (Mertz), and Hamming windows. Examples of window optimization assuming Gaussian functions as the building elements of the apodization window are provided. The formulation is sufficiently general to accommodate other basis functions. (C) 2007 Optical Society of America
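A minimal sketch (not the authors' formulation) of the underlying idea: fit a nonnegative combination of Gaussian basis functions to the envelope of an asymmetric waveform, here via nonnegative least squares (`scipy.optimize.nnls`), which solves a small quadratic program. The time axis, target envelope, and basis parameters below are all invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls  # solves min ||Ax - b||_2 s.t. x >= 0, a simple QP

t = np.linspace(0.0, 10.0, 500)  # time axis (illustrative units)
# Hypothetical asymmetric waveform envelope the window should follow
target = np.exp(-((t - 3.0) ** 2) / 0.5) + 0.3 * np.exp(-((t - 5.0) ** 2) / 2.0)

# Gaussian building elements of the apodization window
centers = np.linspace(0.0, 10.0, 20)
width = 0.8
A = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

coeffs, residual = nnls(A, target)  # nonnegative combination weights
window = A @ coeffs                 # designed asymmetric apodization window
window /= window.max()              # normalize to unit peak
```

Swapping the Gaussian columns of `A` for another basis changes nothing else in the procedure, which mirrors the abstract's remark that the formulation accommodates other basis functions.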
Abstract:
We show that an analysis of the mean and variance of discrete wavelet coefficients of coaveraged time-domain interferograms can be used as a specification for determining when to stop coaveraging. We also show that, if a prediction model built in the wavelet domain is used to determine the composition of unknown samples, a stopping criterion for the coaveraging process can be developed with respect to the uncertainty tolerated in the prediction.
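A hedged sketch of how such a stopping rule could look, assuming a one-level Haar transform (implemented inline to stay self-contained) and a threshold on the standard error of the wavelet detail coefficients across scans; the paper's actual criterion and wavelet choice may differ, and the signal and noise levels below are invented.

```python
import numpy as np

def haar_detail(x):
    """One-level Haar DWT detail coefficients of an even-length signal."""
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
n, max_scans, tol = 256, 200, 0.05
truth = np.sin(2 * np.pi * np.arange(n) / 32.0)  # stand-in interferogram

stopped_at = None
details = []
for k in range(1, max_scans + 1):
    scan = truth + 0.3 * rng.standard_normal(n)  # one noisy scan
    details.append(haar_detail(scan))
    if k >= 2:
        stack = np.array(details)
        # standard error of the mean of each detail coefficient across scans
        sem = stack.std(axis=0, ddof=1) / np.sqrt(k)
        if sem.max() < tol:   # every coefficient is now known precisely enough
            stopped_at = k
            break
```

Replacing `tol` with an uncertainty propagated through a wavelet-domain prediction model would give the prediction-oriented variant of the criterion mentioned in the abstract.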
Abstract:
We give necessary and sufficient conditions for a pair of (generalized) functions $\rho_1(r_1)$ and $\rho_2(r_1, r_2)$, $r_i \in X$, to be the density and pair correlations of some point process in a topological space $X$, for example $\mathbb{R}^d$, $\mathbb{Z}^d$ or a subset of these. This is an infinite-dimensional version of the classical "truncated moment" problem. Standard techniques apply in the case in which there can be only a bounded number of points in any compact subset of $X$. Without this restriction we obtain, for compact $X$, strengthened conditions which are necessary and sufficient for the existence of a process satisfying a further requirement: the existence of a finite third-order moment. We generalize the latter conditions in two distinct ways when $X$ is not compact.
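A first, easily checked necessary condition implicit in this moment problem (standard point-process theory, not the paper's strengthened conditions): writing $N(B)$ for the number of points in a compact set $B$, the density and pair correlation fix the first two factorial moments, so the resulting variance must be nonnegative for every $B$:

```latex
\begin{align*}
\mathbb{E}\,N(B) &= \int_B \rho_1(r)\,dr, \\
\mathbb{E}\,N(B)\bigl(N(B)-1\bigr) &= \int_B\!\int_B \rho_2(r_1,r_2)\,dr_1\,dr_2, \\
\operatorname{Var} N(B) &= \int_B\!\int_B \rho_2\,dr_1\,dr_2
  + \int_B \rho_1\,dr - \Bigl(\int_B \rho_1\,dr\Bigr)^{2} \;\ge\; 0.
\end{align*}
```

The realizability question is precisely whether such constraints, suitably completed, are also sufficient.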
Abstract:
Over many millions of years of independent evolution, placental, marsupial and monotreme mammals have diverged conspicuously in physiology, life history and reproductive ecology. The differences in life histories are particularly striking. Compared with placentals, marsupials exhibit shorter pregnancy, smaller size of offspring at birth and longer period of lactation in the pouch. Monotremes also exhibit short pregnancy, but incubate embryos in eggs, followed by a long period of post-hatching lactation. Using a large sample of mammalian species, we show that, remarkably, despite their very different life histories, the scaling of production rates is statistically indistinguishable across mammalian lineages. Apparently all mammals are subject to the same fundamental metabolic constraints on productivity, because they share similar body designs, vascular systems and costs of producing new tissue.
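The reported comparison amounts to fitting log-log regressions of production rate on body mass within each lineage and testing whether the slopes (allometric exponents) differ. A minimal sketch on synthetic data; the masses, intercepts, noise level and shared exponent below are illustrative, not the paper's dataset or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_loglog_slope(mass, production):
    """OLS slope of log10(production) on log10(mass): the allometric exponent."""
    return np.polyfit(np.log10(mass), np.log10(production), 1)[0]

true_exponent = 0.75  # a single scaling exponent shared across lineages (assumption)
slopes = {}
for lineage, intercept in [("placental", 0.2), ("marsupial", 0.1)]:
    mass = 10 ** rng.uniform(1, 5, 300)             # hypothetical species masses
    production = 10 ** (intercept + true_exponent * np.log10(mass)
                        + 0.1 * rng.standard_normal(300))
    slopes[lineage] = fit_loglog_slope(mass, production)
```

When the lineages share one exponent, as the abstract reports for real mammals, the fitted slopes are statistically indistinguishable even though intercepts (and life histories) differ.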
Abstract:
Experimental results of the temperature dependence of the nonlinear optical response of methyl red doped polymethylmethacrylate films in the range 20°C to 170°C are reported. It is found that the intensity of the phase conjugate signal resulting from degenerate four-wave mixing using pump and probe beams with parallel polarisation states increases dramatically on heating by a factor of ∼ 10, reaching a maximum at ∼ 100°C. The intensity of the phase conjugate signal for the case with crossed polarisation states of the pump and probe beams drops monotonically with increasing temperature. For both configurations the response time shortens with increasing temperature. The particular role of the polymer matrix in this temperature variation of the nonlinear optical response is discussed.
Abstract:
Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional for killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by a large class of such random variables, such as the absolute continuity of the distribution and the smoothness, boundedness or complete monotonicity of its density. These results are then used to derive similar properties for the law of the maxima and first passage times of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires different devices from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al. together with some extensions of Wiener-Hopf techniques to killed Lévy processes. The factorizations developed here also allow for further applications, which we only indicate here.
Abstract:
We present, pedagogically, the Bayesian approach to composed error models under alternative, hierarchical characterizations; demonstrate, briefly, the Bayesian approach to model comparison using recent advances in Markov chain Monte Carlo (MCMC) methods; and illustrate, empirically, the value of these techniques to natural resource economics and to coastal fisheries management in particular. The Bayesian approach to fisheries efficiency analysis is interesting for at least three reasons. First, it is a robust and highly flexible alternative to the commonly applied frequentist procedures that dominate the literature. Second, the Bayesian approach is extremely simple to implement, requiring only a modest addition to most natural-resource economists' toolkits. Third, despite these attractions, applications of Bayesian methodology in coastal fisheries management are few.
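A minimal random-walk Metropolis sketch for the classic normal-half-normal composed-error density $f(\varepsilon)=\frac{2}{\sigma}\phi(\varepsilon/\sigma)\Phi(-\lambda\varepsilon/\sigma)$ with $\sigma^2=\sigma_v^2+\sigma_u^2$, $\lambda=\sigma_u/\sigma_v$; the data, flat priors, step size and chain length are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated composed errors: eps = v - u, v ~ N(0, sv^2) noise, u >= 0 inefficiency
sv_true, su_true, n = 0.3, 0.6, 400
eps = sv_true * rng.standard_normal(n) - np.abs(su_true * rng.standard_normal(n))

def loglik(log_sv, log_su):
    """Normal-half-normal composed-error log-likelihood."""
    sv, su = np.exp(log_sv), np.exp(log_su)
    s = np.hypot(sv, su)
    z = eps / s
    return np.sum(np.log(2.0 / s) + norm.logpdf(z) + norm.logcdf(-(su / sv) * z))

# Random-walk Metropolis on (log sv, log su); flat priors on the log scale (assumption)
theta = np.zeros(2)
cur = loglik(*theta)
accepted, draws = 0, []
for _ in range(2000):
    prop = theta + 0.1 * rng.standard_normal(2)
    new = loglik(*prop)
    if np.log(rng.uniform()) < new - cur:   # Metropolis accept/reject
        theta, cur, accepted = prop, new, accepted + 1
    draws.append(theta.copy())
post = np.exp(np.array(draws[500:]))  # burn-in discarded; back on (sv, su) scale
```

A hierarchical variant would simply add priors over (log_sv, log_su) inside `loglik`; model comparison proceeds from the same chains via marginal-likelihood estimates.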
Abstract:
Duchenne muscular dystrophy (DMD) is a fatal muscle-wasting disorder. Lack of dystrophin compromises the integrity of the sarcolemma and results in myofibers that are highly prone to contraction-induced injury. Recombinant adeno-associated virus (rAAV)-mediated dystrophin gene transfer strategies for the treatment of DMD have been limited by the small cloning capacity of rAAV vectors and the high titers necessary to achieve efficient systemic gene transfer. In this study, we assess the impact of codon optimization on microdystrophin (ΔAB/R3-R18/ΔCT) expression and function in the mdx mouse and compare the function of two different configurations of codon-optimized microdystrophin genes (ΔAB/R3-R18/ΔCT and ΔR4-R23/ΔCT) under the control of a muscle-restrictive promoter (Spc5-12). Codon optimization of microdystrophin significantly increases levels of microdystrophin mRNA and protein after intramuscular and systemic administration of plasmid DNA or rAAV2/8. Physiological assessment demonstrates that codon optimization of ΔAB/R3-R18/ΔCT results in significant improvement in specific force, but does not improve resistance to eccentric contractions compared with non-codon-optimized ΔAB/R3-R18/ΔCT. However, codon-optimized microdystrophin ΔR4-R23/ΔCT completely restored specific force generation and provided substantial protection from contraction-induced injury. These results demonstrate that codon optimization of microdystrophin under the control of a muscle-specific promoter can significantly improve expression levels, such that reduced titers of rAAV vectors will be required for efficient systemic administration.
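A toy sketch of the basic idea of codon optimization: replace each codon with a synonymous codon preferred by the host, leaving the protein sequence unchanged. The preferred-codon table and the protein fragment below are illustrative assumptions; real optimization of a construct like microdystrophin uses complete human codon-usage statistics plus additional constraints (GC content, cryptic splice sites, etc.).

```python
# Illustrative "preferred human codon" table (assumption for this sketch)
PREFERRED = {
    "A": "GCC", "C": "TGC", "D": "GAC", "E": "GAG", "F": "TTC",
    "G": "GGC", "H": "CAC", "I": "ATC", "K": "AAG", "L": "CTG",
    "M": "ATG", "N": "AAC", "P": "CCC", "Q": "CAG", "R": "CGG",
    "S": "AGC", "T": "ACC", "V": "GTG", "W": "TGG", "Y": "TAC",
}
AA_OF = {codon: aa for aa, codon in PREFERRED.items()}  # decode map for checking

def codon_optimize(protein):
    """Emit a DNA sequence using the host-preferred synonymous codon per residue."""
    return "".join(PREFERRED[aa] for aa in protein)

def translate(dna):
    """Translate the emitted codons back to protein, to verify synonymy."""
    return "".join(AA_OF[dna[i:i + 3]] for i in range(0, len(dna), 3))

protein = "MGTRAEVLK"   # hypothetical protein fragment
dna = codon_optimize(protein)
```

The key invariant, checked by `translate`, is that optimization changes codon usage (and thus expression) without changing the encoded protein.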
Abstract:
A novel version of the classical surface pressure tendency equation (PTE) is applied to ERA-Interim reanalysis data to quantitatively assess the contribution of diabatic processes to the deepening of extratropical cyclones relative to effects of temperature advection and vertical motions. The five cyclone cases selected, Lothar and Martin in December 1999, Kyrill in January 2007, Klaus in January 2009, and Xynthia in February 2010, all showed explosive deepening and brought considerable damage to parts of Europe. For Xynthia, Klaus and Lothar diabatic processes contribute more to the observed surface pressure fall than horizontal temperature advection during their respective explosive deepening phases, while Kyrill and Martin appear to be more baroclinically driven storms. The powerful new diagnostic tool presented here can easily be applied to large numbers of cyclones and will help to better understand the role of diabatic processes in future changes in extratropical storminess.
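The hydrostatic core of any such pressure-tendency diagnostic can be illustrated with the hypsometric relation: holding pressure fixed at an upper level, warming the column below (whether by advection or diabatic heating) lowers the surface pressure. This is a sketch of that one ingredient with illustrative numbers, not the paper's full PTE decomposition.

```python
import math

R_d = 287.0   # J kg^-1 K^-1, gas constant of dry air
g = 9.81      # m s^-2

def surface_pressure(p_top_hpa, z_top_m, t_mean_k):
    """Hypsometric relation: p_sfc = p_top * exp(g * z_top / (R_d * T_mean))."""
    return p_top_hpa * math.exp(g * z_top_m / (R_d * t_mean_k))

p_top, z_top = 500.0, 5500.0                    # ~500 hPa surface near 5.5 km
p_cold = surface_pressure(p_top, z_top, 260.0)  # mean column temperature 260 K
p_warm = surface_pressure(p_top, z_top, 265.0)  # column warmed by 5 K
```

The ~14 hPa drop from a 5 K column warming shows why column heating terms can rival temperature advection in the surface pressure falls of explosively deepening cyclones.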
Abstract:
For a Lévy process $\xi=(\xi_t)_{t\geq 0}$ drifting to $-\infty$, we define the so-called exponential functional as follows: $$I_{\xi} = \int_0^{\infty} e^{\xi_t}\,dt.$$ Under mild conditions on $\xi$, we show that the following factorization of exponential functionals holds: $$I_{\xi} \stackrel{d}{=} I_{H^-} \times I_{Y},$$ where $\times$ stands for the product of independent random variables, $H^-$ is the descending ladder height process of $\xi$ and $Y$ is a spectrally positive Lévy process with a negative mean constructed from its ascending ladder height process. As a by-product, we generate an integral or power series representation for the law of $I_{\xi}$ for a large class of Lévy processes with two-sided jumps and also derive some new distributional properties. The proof of our main result relies on a fine Markovian study of a class of generalized Ornstein–Uhlenbeck processes, which is itself of independent interest. We use and refine an alternative approach to studying the stationary measure of a Markov process, which avoids some technicalities and difficulties that appear in the classical method of employing the generator of the dual Markov process.
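The best-known explicit exponential-functional law is Dufresne's classical identity for Brownian motion with drift: $\int_0^\infty e^{2(B_t-\mu t)}\,dt \stackrel{d}{=} 1/(2\gamma_\mu)$ with $\gamma_\mu$ a Gamma$(\mu,1)$ variable, so the mean is $1/(2(\mu-1))$ for $\mu>1$. A Monte Carlo sketch checking that mean by Euler discretization; this is the classical Brownian case only, not the two-sided-jump setting of the paper, and the step sizes and path counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, dt, n_steps, n_paths = 4.0, 0.002, 5000, 4000   # horizon T = 10

# Simulate xi_t = 2*(B_t - mu*t) and accumulate the Riemann sum of exp(xi_t) dt
xi = np.zeros(n_paths)
I = np.zeros(n_paths)
for _ in range(n_steps):
    I += np.exp(xi) * dt
    xi += 2.0 * (np.sqrt(dt) * rng.standard_normal(n_paths) - mu * dt)

exact_mean = 1.0 / (2.0 * (mu - 1.0))   # Dufresne: E[1/(2*Gamma(mu))] = 1/6 here
```

Truncating at T = 10 is harmless because the strong negative drift makes the tail of the integral exponentially small.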
Abstract:
During the last termination (from ~18 000 years ago to ~9000 years ago), the climate significantly warmed and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. Only box models have been used to run transient simulations of this carbon-cycle transition, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to explain LGM versus present-day steady-state climates. In this study we use a coupled climate–carbon model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes all mechanisms into account and favours a rapid cessation of the sinking of brines around 18 000 years ago, when the Antarctic ice sheet was at its maximum extent. In this scenario, we hypothesize that sea-ice formation then shifted to the open ocean, where salty water is quickly mixed with fresher water; this prevents deep sinking of salty water and therefore breaks down the deep stratification and releases carbon from the abyss.
Based on this scenario, it is possible to simulate both the amplitude and timing of the long-term CO2 increase during the last termination in agreement with ice core data. The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.
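A toy two-box sketch of the stratification mechanism (not the intermediate-complexity model used in the study): carbon accumulates in a poorly ventilated deep box and is released to the atmosphere when vertical exchange strengthens. All reservoir sizes, fluxes and rate constants below are invented for illustration.

```python
def run(k_mix, years=20000):
    """Toy two-box carbon model; reservoirs in arbitrary 'ppm-equivalent' units."""
    atm, deep = 200.0, 800.0   # atmosphere(+surface) and deep-ocean carbon
    pump = 0.5                 # biological pump flux, atm -> deep, per year
    for _ in range(years):
        # Mixing vents deep carbon back up whenever the deep box is enriched
        upwelling = k_mix * (deep - 2.0 * atm)
        atm += upwelling - pump
        deep += pump - upwelling
    return atm

co2_stratified = run(0.0012)   # strong stratification (weak vertical exchange)
co2_ventilated = run(0.0025)   # stratification broken down: abyssal carbon escapes
```

Even this caricature reproduces the qualitative story: weakening the deep stratification (larger `k_mix`, as hypothesized when brine sinking ceased) raises atmospheric CO2 toward interglacial values while total carbon is conserved.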
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
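The degeneracy described, that extent and velocity alone cannot pin down (Ca, P*, α), can be illustrated with a toy response model in which drift speed depends only on the ratio Ca/P* and extent only on albedo, so a thickness constraint is needed to isolate a unique region of parameter space. The response functions, observation values and tolerances below are invented, not CICE physics.

```python
import itertools

import numpy as np

def model(ca, pstar, albedo):
    """Toy responses: purely illustrative stand-ins for a sea ice model."""
    velocity = 10.0 * ca / pstar           # drag drives drift, ice strength resists it
    extent = 12.0 - 8.0 * (albedo - 0.5)   # brighter ice melts less -> larger extent
    thickness = 2.0 * np.sqrt(pstar)       # stronger ice ridges thicker
    return extent, velocity, thickness

obs_extent, obs_velocity, obs_thickness = model(1.0, 2.0, 0.6)  # synthetic "truth"

grid = itertools.product(np.linspace(0.5, 2.0, 16),   # Ca
                         np.linspace(0.5, 4.0, 16),   # P*
                         np.linspace(0.5, 0.7, 16))   # albedo

match_2obs, match_3obs = [], []
for ca, pstar, albedo in grid:
    e, v, h = model(ca, pstar, albedo)
    if abs(e - obs_extent) < 0.1 and abs(v - obs_velocity) < 0.2:
        match_2obs.append((ca, pstar, albedo))       # fits extent + velocity only
        if abs(h - obs_thickness) < 0.1:
            match_3obs.append((ca, pstar, albedo))   # also fits thickness
```

Because many (Ca, P*) pairs share the same ratio, the two-observation match set is a whole curve in parameter space, while adding thickness collapses it to a narrow region, which is the qualitative point of the tuning study.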