933 results for "Optimal fusion performance"
Abstract:
Objective In this study, we used a chemometrics-based method to correlate key liposomal adjuvant attributes with in-vivo immune responses through multivariate analysis. Methods The liposomal adjuvant, composed of the cationic lipid dimethyldioctadecylammonium bromide (DDA) and trehalose 6,6-dibehenate (TDB), was modified with 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) at a range of mol% ratios, and the main liposomal characteristics (liposome size and zeta potential) were measured along with immunological performance as an adjuvant for the novel post-exposure fusion tuberculosis vaccine Ag85B-ESAT-6-Rv2660c (H56 vaccine). Partial least squares regression analysis was applied to correlate and cluster liposomal adjuvant particle characteristics with in-vivo immunological performance measures (IgG, IgG1, IgG2b, spleen proliferation, IL-2, IL-5, IL-6, IL-10, IFN-γ). Key findings While a range of factors varied across the formulations, decreasing DSPC content (and the resulting change in zeta potential) contributed the strongest variables in the model. Increased DDA and TDB content (and the resulting zeta potential) stimulated a response skewed towards cell-mediated immunity, with the model identifying correlations with IFN-γ, IL-2 and IL-6. Conclusion This study demonstrates the application of chemometrics-based correlation and clustering, which can inform liposomal adjuvant design.
Abstract:
In this work, we determine the coset weight spectra of all binary cyclic codes of length up to 33, of ternary cyclic and negacyclic codes of length up to 20, and of some distance-optimal binary linear codes of length up to 33, using algebraic properties of the codes together with a computer-assisted search. Given these weight spectra, the monotonicity of the undetected error probability after t-error correction, P(t)ue(C, p), can be checked to any precision in linear time. We used a program written in Maple to check the monotonicity of P(t)ue(C, p) for the investigated codes over a finite set of points p ∈ [0, (q-1)/q], and in this way to determine which of them are not proper.
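For the binary case used for pure error detection, a code with weight distribution {A_w} has undetected-error probability P_ue(C, p) = Σ_{w≥1} A_w p^w (1−p)^{n−w}, and is called proper when P_ue is nondecreasing on [0, 1/2]. A minimal sketch of the grid-based monotonicity check (error detection only, not the after-t-error-correction variant studied here, and in Python rather than Maple):

```python
def p_ue(weights, n, p):
    """P_ue(p) = sum_{w>=1} A_w * p^w * (1-p)^(n-w) for a binary code."""
    return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items() if w > 0)

def is_proper(weights, n, points=1000):
    """A binary code is 'proper' if P_ue is nondecreasing on [0, 1/2]."""
    ps = [0.5 * i / points for i in range(points + 1)]
    vals = [p_ue(weights, n, p) for p in ps]
    return all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))

# Weight distribution of the [7,4] binary Hamming code:
# A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1
hamming = {0: 1, 3: 7, 4: 7, 7: 1}
print(is_proper(hamming, 7))   # perfect codes such as this are known to be proper
```

As the abstract notes, a fixed grid of points gives the check in time linear in the number of points, to whatever precision the grid spacing affords.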
Abstract:
Link quality-based rate adaptation has been widely used for IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access. Selecting transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm that jointly considers link quality and channel access to optimise network throughput. The objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared to a link-throughput-optimisation-based algorithm. It is found that rate adaptation by optimising link-layer throughput alone can result in large performance loss, which cannot be compensated by optimising the MAC access mechanism alone. Results show that cross-layer design can achieve consistent and considerable performance gains of up to 20%, and that it deserves to be exploited in practical designs for IEEE 802.11 networks.
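The tension between link-optimal and MAC-aware rate selection can be illustrated with a toy model. All numbers below (per-rate packet error rates and a fixed per-frame overhead standing in for contention, interframe spaces and the ACK) are assumed for illustration, not taken from the paper:

```python
# Toy comparison: rate chosen by link-layer goodput alone vs. by
# MAC-aware throughput that accounts for fixed per-frame overhead.
RATES_MBPS = [6, 12, 24, 48, 54]
PER = {6: 0.0, 12: 0.005, 24: 0.02, 48: 0.28, 54: 0.40}  # assumed error rates
PAYLOAD_BITS = 12000          # 1500-byte frame
OVERHEAD_US = 2000.0          # assumed contention + DIFS/SIFS + ACK time

def link_goodput(r):
    return r * (1 - PER[r])                   # Mbit/s, ignores MAC overhead

def mac_throughput(r):
    t_frame = OVERHEAD_US + PAYLOAD_BITS / r  # us (r in Mbit/s = bits/us)
    return PAYLOAD_BITS * (1 - PER[r]) / t_frame

best_link = max(RATES_MBPS, key=link_goodput)
best_mac = max(RATES_MBPS, key=mac_throughput)
print(best_link, best_mac)   # -> 48 24
```

Because the fixed overhead dominates airtime at high rates, the MAC-aware criterion favours the more reliable 24 Mbit/s mode even though 48 Mbit/s maximises raw link goodput, which is exactly the kind of divergence the cross-layer design targets.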
Abstract:
The influence of IT investment on hospital efficiency and quality is of great interest to healthcare executives as well as insurers. Few studies have examined how IT investments influence both efficiency and quality, or whether there is an optimal IT investment level that influences both in the desired direction. Decision makers in healthcare wonder whether there are tradeoffs between their pursuit of hospital operational efficiency and quality. Our study, involving a two-stage double-bootstrap DEA analysis of 187 US hospitals over 2 years, found direct effects of IT investment upon service quality and a moderating effect of quality upon operational efficiency. Further, our findings indicate a U-shaped relationship between IT investments and operational efficiency, suggesting that IT investments have diminishing returns beyond a certain point.
Abstract:
In this letter, we derive continuum equations for the generalization error of the Bayesian online algorithm (BOnA) for the one-layer perceptron with a spherical covariance matrix using the Rosenblatt potential and show, by numerical calculations, that the asymptotic performance of the algorithm is the same as the one for the optimal algorithm found by means of variational methods with the added advantage that the BOnA does not use any inaccessible information during learning. © 2007 IEEE.
Abstract:
We investigate the transmission performance of advanced modulation formats in nonlinear regenerative channels based on cascaded phase sensitive amplifiers. We identify the impact of amplitude and phase noise dynamics along the transmission line and show that after a cascade of regenerators, densely packed single ring PSK constellations outperform multi-ring constellations. The results of this study will greatly simplify the design of future nonlinear regenerative channels for ultra-high capacity transmission. © 2013 Optical Society of America.
Abstract:
We examine the impact of ultra-narrow off-centred filtering parameters and pulse width on the performance of a wavelength-paired Nx40Gbit/s DWDM transmission consisting of carrier-suppressed return-to-zero signals with 0.64 bit/s/Hz spectral efficiency (without polarization-division multiplexing). © 2004 Optical Society of America.
Abstract:
Mathematics Subject Classification: 26A33; 93C15; 93C55; 93B36; 93B35; 93B51; 03B42; 70Q05; 49N05
Abstract:
Tool life is an important factor to be considered during the optimisation of a machining process, since cutting parameters can be adjusted to optimise tool changing, reducing production cost and time. Also, the performance of a tool is directly linked to the generated surface roughness, which is important in cases with strict surface quality requirements. The prediction of tool life and the resulting surface roughness in milling operations has attracted considerable research effort. The research reported herein focuses on defining the influence of milling cutting parameters such as cutting speed, feed rate and axial depth of cut on three major tool performance parameters, namely tool life, material removal and surface roughness. The research seeks to define methods that allow the selection of optimal parameters for best tool performance when face milling 416 stainless steel bars. For this study the Taguchi method was applied using a special orthogonal-array design that allows the entire parameter space to be studied with only a small number of experiments, representing savings in experimental cost and time. The findings were that cutting speed has the most influence on tool life and surface roughness, and very limited influence on material removal. Finally, tool performance can be judged either from tool life or from the volume of material removed.
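A Taguchi-style analysis of this kind can be sketched as follows, using an L9 orthogonal array for the three 3-level factors and a larger-is-better signal-to-noise (S/N) ratio; the tool-life responses are made-up numbers chosen so that cutting speed dominates, mirroring the reported finding:

```python
import math

# L9(3^3): each row = (speed level, feed level, depth level), levels 0..2
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
tool_life_min = [42, 40, 38, 30, 27, 29, 18, 16, 15]   # synthetic responses

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N for a single observation: 20*log10(y)."""
    return -10 * math.log10(1 / y**2)

sn = [sn_larger_is_better(y) for y in tool_life_min]

# Mean S/N per level for each factor; a factor's effect is the range
# (max - min) of its level means -- the larger the range, the more
# influential the factor.
effects = []
for f in range(3):
    level_means = [sum(s for row, s in zip(L9, sn) if row[f] == lvl) / 3
                   for lvl in range(3)]
    effects.append(max(level_means) - min(level_means))

print("S/N range per factor (speed, feed, depth):", effects)
```

With these synthetic data the speed column has by far the largest S/N range, which is how an orthogonal-array study ranks factor influence from only nine runs instead of the full 27-run factorial.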
Abstract:
Premium Intraocular Lenses (IOLs) such as toric IOLs, multifocal IOLs (MIOLs) and accommodating IOLs (AIOLs) can provide better refractive and visual outcomes compared to standard monofocal designs, leading to greater levels of post-operative spectacle independence. The principal theme of this thesis relates to the development of new assessment techniques that can help to improve future premium IOL design. IOLs designed to correct astigmatism form the focus of the first part of the thesis. A novel toric IOL design was devised to decrease the effect of toric rotation on patient visual acuity, but was found to have neither a beneficial nor a detrimental impact on visual acuity retention. IOL tilt, like rotation, may curtail visual performance; however, current IOL tilt measurement techniques require specialist equipment not readily available in most ophthalmological clinics. Thus a new approach that applies Pythagoras' theorem to digital images of IOL optic symmetry in order to calculate tilt was proposed, and shown to be both accurate and highly repeatable. A literature review revealed little information on the relationship between IOL tilt, decentration and rotation, and so this was examined. A poor correlation between these factors was found, indicating that they occur independently of each other. Next, presbyopia-correcting IOLs were investigated. The light distribution of different MIOLs and an AIOL was assessed using perimetry, to establish whether this could be used to inform optimal IOL design. The anticipated differences in threshold sensitivity between IOLs were not found, however, so perimetry was concluded to be ineffective in mapping the retinal projection of blur. The observed difference between subjective and objective measures of accommodation, arising from the influence of pseudoaccommodative factors, was explored next to establish how much additional objective power would be required to restore the eye's focus with AIOLs.
Blur tolerance was found to be the key contributor to ocular depth of focus, with an approximate dioptric influence of 0.60D. Our understanding of MIOLs may be limited by the need for subjective defocus curves, which are lengthy and do not permit important additional measures to be undertaken. The use of aberrometry to provide faster objective defocus curves was examined. Although subjective and objective measures agreed well, the peaks of the MIOL defocus curve profile were not evident in the objective prediction of acuity, indicating a need for further refinement of visual quality metrics based on ocular aberrations. The experiments detailed in the thesis evaluate methods to improve visual performance with toric IOLs. They also investigate new techniques to allow more rapid post-operative assessment of premium IOLs, which could provide greater insight into several aspects of visual quality, in order to optimise future IOL design and ultimately enhance patient satisfaction.
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large-p, small-n data sets, for instance, require a different set of tools from the large-n, small-p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. It is important to emphasize right away that the so-called no-free-lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
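The taxonomy-driven choice of tools might be sketched as a simple dispatch on (n, p); the category labels, tool groupings and thresholds below are illustrative, not the paper's:

```python
# Toy dispatch on data bigness: map (sample size n, dimensionality p)
# to a suggested toolbox. Thresholds are arbitrary placeholders.
def bigness_category(n, p, big_n=1_000_000, big_p=10_000):
    if p > big_p and n <= big_n:
        return "large p, small n: regularization/penalization, selection, projection"
    if n > big_n and p <= big_p:
        return "large n, small p: parallelization, aggregation, sequentialization"
    if n > big_n and p > big_p:
        return "large n, large p: compression, randomization, hybridization"
    return "moderate n and p: standard tools after preprocessing/standardization"

print(bigness_category(200, 50_000))       # e.g. a genomics-style data set
print(bigness_category(100_000_000, 100))  # e.g. a clickstream-style data set
```

The no-free-lunch caveat in the abstract applies to any such dispatch: the mapping encodes heuristics about which tools tend to work in each regime, not a guarantee.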
Abstract:
In this paper, we investigate the hop distance optimization problem in ad hoc networks where cooperative multi-input single-output (MISO) transmission is adopted to improve the energy efficiency of the network. We first establish the energy model of multihop cooperative MISO transmission. Based on the model, the energy consumption per bit of the network with high node density is minimized numerically by finding an optimal hop distance, and, to obtain the global minimum energy consumption, the hop distance and the number of cooperating nodes around each relay node for multihop transmission are jointly optimized. We also compare the performance of multihop cooperative MISO transmission with single-input single-output (SISO) transmission under the same network condition (high node density). We show that cooperative MISO transmission can be energy-inefficient compared with SISO transmission when the path-loss exponent becomes high. We then extend our investigation to networks with varied node densities and show the effectiveness of the joint optimization method in this scenario using simulation results. It is shown that the optimal results depend on network conditions such as node density and path-loss exponent, and that the simulation results closely match those obtained using the numerical models for high-node-density cases.
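The flavour of the hop-distance optimization can be illustrated for the non-cooperative (SISO) baseline with a commonly used energy model, E(d) = (D/d)·(E_circuit + ε·d^α): circuit energy favours few long hops, amplifier energy favours many short ones. The constants below are assumed, not the paper's:

```python
# Energy per bit over total distance D with hops of length d, minimised
# by grid search and checked against the closed form obtained by setting
# dE/dd = 0:  d* = (E_circuit / (eps_amp * (alpha - 1)))**(1/alpha).
E_CIRCUIT = 100e-9     # J/bit of TX/RX electronics (assumed)
EPS_AMP = 10e-12       # J/bit/m^alpha amplifier coefficient (assumed)
ALPHA = 3.0            # path-loss exponent
D_TOTAL = 1000.0       # total source-to-sink distance (m)

def energy_per_bit(d):
    return (D_TOTAL / d) * (E_CIRCUIT + EPS_AMP * d**ALPHA)

candidates = [1 + 0.1 * i for i in range(2000)]   # hop lengths 1.0 .. 200.9 m
d_best = min(candidates, key=energy_per_bit)
d_closed = (E_CIRCUIT / (EPS_AMP * (ALPHA - 1))) ** (1 / ALPHA)
print(round(d_best, 1), round(d_closed, 2))
```

Note how the optimum shrinks as the path-loss exponent α grows, which is the same mechanism behind the paper's observation that the best configuration depends on the path-loss exponent.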
Abstract:
Market orientation (MO) and marketing performance measurement (MPM) are two of the most widespread strategic marketing concepts among practitioners. However, some have questioned the benefits of extensive investments in MO and MPM. More importantly, little is known about which combinations of MO and MPM are optimal in ensuring high business performance. To address this research gap, the authors analyze a unique data set of 628 firms with a novel method of configurational analysis: fuzzy-set qualitative comparative analysis. In line with prior research, the authors find that MO is an important determinant of business performance. However, to reap its benefits, managers need to complement it with appropriate MPM, the level and focus of which vary across firms. For example, whereas large firms and market leaders generally benefit from comprehensive MPM, small firms may benefit from measuring marketing performance only selectively or by focusing on particular dimensions of marketing performance. The study also finds that many of the highest-performing firms do not follow any of the particular best practices identified.
Abstract:
The implementation of country-specific supply chain coordination techniques ensures optimal global supply chain performance. This paper looks at the success factors associated with a supply chain coordination strategy within the global supply chain of a successful, medium-sized, privately-owned company with locations in North America (USA), Europe (Hungary) and Asia (China). Through the example of GSL's Chinese plant, we argue that increased collaboration in the supply network, with appropriate supply chain coordination, not only brings down inventory levels but can also improve conformance quality and reduce quoted lead times.
Abstract:
Since the seminal works of Markowitz (1952), Sharpe (1964), and Lintner (1965), numerous studies on portfolio selection and performance measurement have been based upon the mean-variance framework. However, several researchers (e.g., Arditti (1967, 1971), Samuelson (1970), and Rubinstein (1973)) argue that the higher moments cannot be neglected unless there is reason to believe that: (i) the asset returns are normally distributed and the investor's utility function is quadratic, or (ii) the empirical evidence demonstrates that higher moments are irrelevant to the investor's decision. Based on the same argument, this dissertation investigates the impact of higher moments of return distributions on three issues concerning 14 international stock markets.
First, portfolio selection with skewness is determined using Polynomial Goal Programming, in which investor preferences for skewness can be incorporated. The empirical findings suggest that the return distributions of international stock markets are not normally distributed, and that incorporating skewness into an investor's portfolio decision causes a major change in the construction of the optimal portfolio. The evidence also indicates that an investor will trade expected portfolio return for skewness. Moreover, when short sales are allowed, investors are better off, as they attain higher expected return and skewness simultaneously.
Second, the performance of international stock markets is evaluated using two types of performance measures: (i) the two-moment performance measures of Sharpe (1966) and Treynor (1965), and (ii) the higher-moment performance measures of Prakash and Bear (1986) and Stephens and Proffitt (1991). The empirical evidence indicates that higher moments of return distributions are significant and relevant to the investor's decision. Thus, the higher-moment performance measures are more appropriate for evaluating the performance of international stock markets. The evidence also indicates that the various measures provide vastly different performance rankings of the markets, albeit in the same direction.
Finally, the inter-temporal stability of the international stock markets is investigated using the Parhizgari and Prakash (1989) algorithm for the Sen and Puri (1968) test, which accounts for non-normality of return distributions. The empirical findings provide strong evidence supporting the stability of international stock market movements. However, when the Anderson test, which assumes normality of return distributions, is employed, stability in the correlation structure is rejected. This suggests that non-normality of the return distribution is an important factor that cannot be ignored in the investigation of the inter-temporal stability of international stock markets.
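The effect of adding a skewness term to the portfolio objective can be illustrated with a toy two-asset example. The synthetic return distributions and preference weights below are arbitrary, and the simple linear scalarization is a stand-in for, not a reproduction of, the dissertation's Polynomial Goal Programming:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
a = rng.normal(0.01, 0.05, T)                  # symmetric asset
b = 0.008 + rng.gamma(2.0, 0.03, T) - 0.06     # right-skewed asset, mean 0.008

def stats(w):
    """Mean, variance and skewness of the portfolio w*a + (1-w)*b."""
    r = w * a + (1 - w) * b
    return r.mean(), r.var(), ((r - r.mean()) ** 3).mean() / r.std() ** 3

ws = np.linspace(0.0, 1.0, 101)
# Mean-variance objective vs. the same objective plus a skewness preference
mv  = max(ws, key=lambda w: stats(w)[0] - 5.0 * stats(w)[1])
mvs = max(ws, key=lambda w: stats(w)[0] - 5.0 * stats(w)[1] + 0.05 * stats(w)[2])
print("mean-variance weight on asset a:", round(float(mv), 2))
print("with skewness preference:      ", round(float(mvs), 2))
```

With a material preference for skewness, the optimal weight shifts toward the right-skewed asset even at the cost of expected return, which is the trade-off between expected return and skewness that the dissertation reports.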