963 results for In-sample


Relevance:

70.00%

Publisher:

Abstract:

Purpose – Role clarity of frontline staff is critical to their perceptions of service quality in call centres. The purpose of this study is to examine the effects of role clarity and its antecedents and consequences on employee-perceived service quality. Design/methodology/approach – A conceptual model, based on the job characteristics model and cognitive theories, is proposed. Key antecedents of role clarity considered here are feedback, autonomy, participation, supervisory consideration, and team support, while key consequences are organizational commitment, job satisfaction and service quality. An internal marketing approach is adopted and all variables are measured from the frontline employee's perspective. A structural equation model is developed and tested on a sample of 342 call centre representatives of a major commercial bank in the UK. Findings – The research reveals that role clarity plays a critical role in explaining employee perceptions of service quality. Further, the findings indicate that feedback, participation and team support significantly influence role clarity, which in turn influences job satisfaction and organizational commitment. Research limitations/implications – The research suggests that boundary personnel in service firms should strive for greater role clarity in order to deliver better service quality. The limitations lie in sample availability, which was restricted to the in-house transaction call centres of a single bank. Originality/value – The contributions of this study are untangling the confusing research evidence on the effect of role clarity on service quality, using service quality as a performance variable as opposed to productivity estimates, adopting an internal marketing approach to understanding the phenomenon, and introducing teamwork along with job-design and supervisory factors as antecedents of role clarity.

Relevance:

70.00%

Publisher:

Abstract:

This paper examines whether the observed long memory behavior of log-range series is to some extent spurious and whether it can be explained by the presence of structural breaks. Utilizing stock market data, we show that the characterization of log-range series as long memory processes can be a strong assumption. Moreover, we find that all examined series experience a large number of significant breaks. Once the breaks are accounted for, the volatility persistence is eliminated. Overall, the findings suggest that volatility can be adequately represented, at least in-sample, through a multiple-break process and a short-run component.
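
To make the break-versus-long-memory point above concrete, here is a small, self-contained Python sketch (not the authors' code): a short-memory series with a few level shifts produces a high rescaled-range (R/S) Hurst estimate, which falls back toward 0.5 once each regime is demeaned. The function and variable names (rs_hurst, segments) are illustrative assumptions.

```python
import numpy as np

def rs_hurst(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent (illustrative)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = 2 ** np.arange(3, int(np.log2(n)) + 1)
    log_s, log_rs = [], []
    for s in sizes:
        chunks = x[: n - n % s].reshape(-1, s)
        rs_vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())
            sd = c.std(ddof=1)
            if sd > 0:
                rs_vals.append((dev.max() - dev.min()) / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_s, log_rs, 1)[0]  # slope ~ Hurst exponent

# A short-memory series with a few mean shifts looks persistent...
rng = np.random.default_rng(0)
segments = [rng.normal(loc=m, scale=1.0, size=500) for m in (0.0, 1.5, 0.5, 2.0)]
series = np.concatenate(segments)
print("H with breaks left in:   %.2f" % rs_hurst(series))

# ...but once each regime is demeaned (breaks accounted for), the estimate drops
demeaned = np.concatenate([seg - seg.mean() for seg in segments])
print("H after removing breaks: %.2f" % rs_hurst(demeaned))
```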

Relevance:

70.00%

Publisher:

Abstract:

The importance of the changeover process in the manufacturing industry is becoming widely recognised. Changeover is the complete process of changing from the manufacture of one product to the manufacture of an alternative product until specified production and quality rates are reached. Initiatives to improve changeover exist in industry, as a better changeover process typically contributes to improved quality performance. A high-quality and reliable changeover process can be achieved through the implementation of continuous or radical improvements. This research examines the changeover process of Saudi Arabian manufacturing firms because Saudi Arabia's government is focused on expanding GDP and increasing the number of export manufacturing firms. Furthermore, it is encouraging foreign manufacturing firms to invest within Saudi Arabia. These initiatives therefore require that Saudi manufacturing businesses develop their changeover practice in order to compete in the market and achieve the government's objectives. The aim of this research is thus to discover the current status of changeover process implementation in Saudi Arabian manufacturing businesses. To achieve this aim, the main objective is to develop a conceptual model for understanding and examining the effectiveness of the changeover process within Saudi Arabian manufacturing firms, facilitating identification of those activities that affect the reliability and quality of the process. To provide a comprehensive understanding of this area, the research first explores the concept of quality management and its relationship to firm performance and the performance of manufacturing changeover. An extensive body of literature on lean manufacturing and changeover practice was reviewed. A conceptual research model was identified on the basis of this review, with a focus on providing high-quality and reliable manufacturing changeover during set-up in a dynamic environment. Exploratory research was conducted in a sample of Saudi manufacturing firms to understand the features of the changeover process within the manufacturing sector and to provide a basis for modifying the proposed conceptual model. Qualitative research was employed, using semi-structured interviews, direct observations and documentation, in order to understand the real situation, such as actual daily practice and the current status of the changeover process in the field. The research instrument, the Changeover Effectiveness Assessment Tool (CEAT), was developed to evaluate changeover practices. A pilot study was conducted to examine the CEAT proposed for the main research. Consequently, the conceptual model was modified and the CEAT improved in response to the pilot study findings. Case studies were then conducted within eight Saudi manufacturing businesses, assessing the implementation of manufacturing changeover practice in the lighting and medical products sectors. These two sectors were selected because their operations strategy is batch production and because they fulfilled the research sampling strategy. The outcomes of the research improved the conceptual model, ultimately to facilitate firms' adoption and rapid implementation of a high-quality and reliable changeover during the set-up process. The main finding of this research is that Quality factors were at the lowest levels compared with the other factors, namely People, Process and Infrastructure.
This research contributes to enabling Saudi businesses to implement the changeover process by adopting the conceptual model. In addition, guidelines for facilitating implementation are provided in this thesis. This research therefore provides insight to enable the Saudi manufacturing industry to be more responsive to rapidly changing customer demands.

Relevance:

70.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or Bayesian variable selection methods, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
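
The following is a minimal Python sketch of the row-partitioned, median-aggregation idea described above, assuming scikit-learn's LassoCV as the per-subset selector; the function name fit_message, the subset count and the toy data are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def fit_message(X, y, n_subsets=4, seed=0):
    """Row-partitioned selection + median inclusion + averaged re-estimation."""
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(y)), n_subsets)

    # 1) Feature selection on each row subset (in parallel in the real setting)
    inclusion = []
    for rows in subsets:
        lasso = LassoCV(cv=5).fit(X[rows], y[rows])
        inclusion.append(np.abs(lasso.coef_) > 1e-8)

    # 2) 'Median' inclusion index: keep features chosen by at least half the subsets
    selected = np.mean(inclusion, axis=0) >= 0.5

    # 3) Re-estimate coefficients on each subset with the selected features, then average
    coefs = np.zeros((n_subsets, X.shape[1]))
    for i, rows in enumerate(subsets):
        ols = LinearRegression().fit(X[rows][:, selected], y[rows])
        coefs[i, selected] = ols.coef_
    return selected, coefs.mean(axis=0)

# Toy data: 2000 rows, 50 features, 5 true signals
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 50))
beta = np.zeros(50)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ beta + rng.normal(size=2000)
selected, beta_hat = fit_message(X, y)
print("selected features:", np.flatnonzero(selected))
```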

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does NOT depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
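
Below is a simplified, hedged sketch of the decorrelate-then-partition idea: rows are multiplied by an inverse square root of a ridge-regularized gram matrix before the columns are split across workers. It omits the refinement stage of the actual DECO procedure, and names such as deco_fit and n_workers are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def deco_fit(X, y, n_workers=4, ridge=1.0):
    """Decorrelate rows, split columns across workers, fit each piece with a lasso."""
    n, p = X.shape
    G = X @ X.T / p + ridge * np.eye(n)          # ridge-regularized gram matrix
    vals, vecs = np.linalg.eigh(G)
    F = vecs @ np.diag(vals ** -0.5) @ vecs.T    # G^{-1/2}
    X_t, y_t = F @ X, F @ y                      # decorrelated data

    beta_hat = np.zeros(p)
    for cols in np.array_split(np.arange(p), n_workers):
        lasso = LassoCV(cv=5).fit(X_t[:, cols], y_t)   # one worker's sub-problem
        beta_hat[cols] = lasso.coef_
    return beta_hat

# Toy data: 200 rows, 400 strongly correlated features, 4 true signals
rng = np.random.default_rng(2)
common = rng.normal(size=(200, 1))               # shared factor => high correlation
X = 0.7 * common + rng.normal(size=(200, 400))
beta = np.zeros(400)
beta[[0, 50, 150, 300]] = [4.0, -3.0, 2.0, 2.0]
y = X @ beta + rng.normal(size=200)
print("largest recovered coefficients at:", np.sort(np.argsort(np.abs(deco_fit(X, y)))[-4:]))
```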

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework {\em DEME} (DECO-message) by leveraging both the {\em DECO} and the {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

70.00%

Publisher:

Abstract:

Anomalous concentrations of Ir have been found in upper Eocene sediments from Ocean Drilling Program (ODP) Hole 1090B. Clear and dark-colored spherules that are believed to be microtektites and clinopyroxene-bearing microkrystites, respectively, were found in the samples with the highest Ir. The peak Ir concentration in Sample 177-1090B-30X-5, 105-106 (954 pg/g) and the net Ir fluence (14 ng/cm**2) at this site are higher than at most other localities except for Caribbean site RC9-58. The Ir anomaly and impact debris are probably correlative with similar deposits found at ODP Site 689 on Maud Rise and at other localities around the world.

Relevance:

70.00%

Publisher:

Abstract:

Background Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome – Rankin Scale and Barthel Index – were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes gained from each method were compared using Friedman two-way ANOVA. Results Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.0001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome
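
As a rough illustration of the comparison above, the sketch below contrasts a standard two-proportion sample-size formula with what I understand to be Whitehead's (1993) formula for ordinal (proportional-odds) outcomes; the Rankin Scale distribution and odds ratio used here are invented for illustration, not taken from the trials analysed.

```python
from math import log
import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

# Binary design: proportion with a "good outcome" in control vs treated arms
p_ctrl, p_trt = 0.40, 0.47
n_binary = 2 * z**2 * (p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)) / (p_trt - p_ctrl)**2

# Ordinal design (Whitehead): N = 6 z^2 / [ (log OR)^2 * (1 - sum(pbar_k^3)) ]
ctrl = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.12, 0.08])   # assumed Rankin 0-6 split
odds_ratio = 1.35                                             # assumed common odds ratio

# Shift the control distribution by the common odds ratio to get the treated arm
cum = np.cumsum(ctrl)[:-1]
cum_trt = odds_ratio * cum / (1 - cum + odds_ratio * cum)
trt = np.diff(np.concatenate(([0.0], cum_trt, [1.0])))

pbar = (ctrl + trt) / 2
n_ordinal = 6 * z**2 / (log(odds_ratio)**2 * (1 - np.sum(pbar**3)))

print(f"binary design : ~{n_binary:.0f} patients in total")
print(f"ordinal design: ~{n_ordinal:.0f} patients in total")
```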

Relevance:

70.00%

Publisher:

Abstract:

Beech bark disease (BBD), a non-native association of the fungal pathogen Neonectria faginata and the beech scale insect Cryptococcus fagisuga, has dramatically affected American beech within North American forests. To monitor the spread and effects of BBD in Michigan, a network of forest health monitoring plots was established in 2001 following the disease's discovery in Ludington State Park (Mason County). Forest health canopy condition and basic forestry measurements including basal area were reassessed on beech trees in these plots in 2011 and 2012. The influence of bark-inhabiting fungal endophytes on BBD resistance was investigated by collecting cambium tissue from apparently resistant and susceptible beech. Vigor ratings showed significant influences of BBD on sample beech, resulting in reduced health, substantiated by significant increases in dead beech basal area over time. C. fagisuga distribution was found to be spatially clustered and widespread across the 22 counties in Michigan's Lower Peninsula which contained monitoring plots. Neonectria has been found in Emmet, Cheboygan and Wexford counties in the Lower Peninsula, which may coincide with additional BBD introduction locations. Surveys for BBD resistance yielded five apparently resistant beech, which were added to a BBD resistance database. The most frequently isolated endophytes from cambium tissue were identified by DNA sequencing primarily as Deuteromycetes and Ascomycetes, including Chaetomium globosum, Neohendersonia kickxii and Fusarium flocciferum. In antagonism trials, N. faginata showed significant growth reduction when paired with three beech fungal endophytes. The results of the antagonism trials and decay tests indicate that N. faginata may be a relatively poor competitor in vivo with limited ability to degrade cellulose.

Relevance:

70.00%

Publisher:

Abstract:

Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size.
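
The simulation-based power calculations described above can be illustrated with a generic Monte Carlo sketch (this is not the authors' seizure time-course simulator): for a binary responder outcome, trial arms are repeatedly simulated to find the per-arm sample size that reaches roughly 80% power. The response rates chosen are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulated_power(p_control, p_active, n_per_arm, n_trials=2000, alpha=0.05):
    """Power of a two-proportion z-test, estimated by Monte Carlo simulation."""
    z_crit = norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(n_trials):
        c = rng.binomial(n_per_arm, p_control)
        a = rng.binomial(n_per_arm, p_active)
        p_pool = (c + a) / (2 * n_per_arm)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0:
            z = (a / n_per_arm - c / n_per_arm) / se
            hits += abs(z) > z_crit
    return hits / n_trials

# Example: 30% responders on placebo vs 55% on the trial AED (assumed rates)
for n in (40, 60, 80, 100):
    print(f"n per arm = {n:3d}: power ~ {simulated_power(0.30, 0.55, n):.2f}")
```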

Relevance:

60.00%

Publisher:

Abstract:

The aim of this paper is to provide a contemporary summary of statistical and non-statistical meta-analytic procedures that have relevance to the type of experimental designs often used by sport scientists when examining differences/change in dependent measure(s) as a result of one or more independent manipulation(s). Using worked examples from studies on observational learning in the motor behaviour literature, we adopt a random effects model and give a detailed explanation of the statistical procedures for the three types of raw score difference-based analyses applicable to between-participant, within-participant, and mixed-participant designs. Major merits and concerns associated with these quantitative procedures are identified and agreed methods are reported for minimizing biased outcomes, such as those for dealing with multiple dependent measures from single studies, design variation across studies, different metrics (i.e. raw scores and difference scores), and variations in sample size. To complement the worked examples, we summarize the general considerations required when conducting and reporting a meta-analysis, including how to deal with publication bias, what information to present regarding the primary studies, and approaches for dealing with outliers. By bringing together these statistical and non-statistical meta-analytic procedures, we provide the tools required to clarify understanding of key concepts and principles.
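
For readers who want the pooling step spelled out, here is a compact sketch of a random-effects meta-analysis using the DerSimonian-Laird between-study variance estimator; the effect sizes and variances are placeholders, not values from the observational-learning studies discussed.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)        # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se_re, tau2

# Placeholder standardized mean differences and their sampling variances
d = [0.42, 0.10, 0.65, 0.31, 0.55]
v = [0.04, 0.02, 0.09, 0.03, 0.06]
est, se, tau2 = random_effects_pool(d, v)
print(f"pooled d = {est:.2f} (95% CI {est - 1.96*se:.2f} to {est + 1.96*se:.2f}), tau^2 = {tau2:.3f}")
```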

Relevance:

60.00%

Publisher:

Abstract:

This paper investigates the suitability of existing performance measures under the assumption of a clearly defined benchmark. A range of measures is examined, including the Sortino Ratio, the Sharpe Selection Ratio (SSR), Student's t-test and a decay rate measure. A simulation study is used to assess the power and bias of these measures based on variations in sample size and the mean performance of two simulated funds. The Sortino Ratio is found to be the superior performance measure, exhibiting more power and less bias than the SSR when the distribution of excess returns is skewed.
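
A minimal sketch of the Sortino Ratio referred to above is given below; the excess-return series is simulated and the target of zero is an assumption, not a detail taken from the paper.

```python
import numpy as np

def sortino_ratio(excess_returns, target=0.0):
    """Mean excess return over downside deviation (returns below `target`)."""
    r = np.asarray(excess_returns, dtype=float)
    downside = np.minimum(r - target, 0.0)
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return (r.mean() - target) / downside_dev

rng = np.random.default_rng(0)
fund_excess = rng.normal(0.004, 0.03, size=120)   # 10 years of simulated monthly excess returns
print(f"Sortino ratio: {sortino_ratio(fund_excess):.2f}")
```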

Relevance:

60.00%

Publisher:

Abstract:

Early models of bankruptcy prediction employed financial ratios drawn from pre-bankruptcy financial statements and performed well both in-sample and out-of-sample. Since then there has been an ongoing effort in the literature to develop models with even greater predictive performance. A significant innovation in the literature was the introduction into bankruptcy prediction models of capital market data such as excess stock returns and stock return volatility, along with the application of the Black–Scholes–Merton option-pricing model. In this note, we test five key bankruptcy models from the literature using an up-to-date data set and find that they each contain unique information regarding the probability of bankruptcy but that their performance varies over time. We build a new model comprising key variables from each of the five models and add a new variable that proxies for the degree of diversification within the firm. The degree of diversification is shown to be negatively associated with the risk of bankruptcy. This more general model outperforms the existing models in a variety of in-sample and out-of-sample tests.
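
The in-sample versus out-of-sample evaluation pattern mentioned above can be sketched generically as follows, using a logistic bankruptcy model fitted on synthetic data; the variables, coefficients and the diversification proxy are invented for illustration and do not reproduce any of the five models tested in the note.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
leverage        = rng.normal(0.5, 0.2, n)      # accounting-ratio style variable
excess_return   = rng.normal(0.0, 0.3, n)      # capital-market variable
volatility      = rng.normal(0.3, 0.1, n)
diversification = rng.uniform(0.0, 1.0, n)     # proxy for within-firm diversification

# Invented data-generating process: diversification lowers bankruptcy risk
logit = -4 + 3*leverage - 2*excess_return + 4*volatility - 1.5*diversification
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
X = np.column_stack([leverage, excess_return, volatility, diversification])

X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_in, y_in)
print("in-sample AUC:     %.3f" % roc_auc_score(y_in, model.predict_proba(X_in)[:, 1]))
print("out-of-sample AUC: %.3f" % roc_auc_score(y_out, model.predict_proba(X_out)[:, 1]))
```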

Relevance:

60.00%

Publisher:

Abstract:

Phospholipids are the key structural component of cell membranes, and recent advances in electrospray ionization mass spectrometry provide for the fast and efficient analysis of these compounds in biological extracts.1-3 The application of electrospray ionization tandem mass spectrometry (ESI-MS/MS) to phospholipid analysis has demonstrated several key advantages over the more traditional chromatographic methods, including speed and greater structural information.4 For example, the ESI-MS/MS spectrum of a typical phospholipid, particularly in negative ion mode, readily identifies the carbon chain length and the degree of unsaturation of each of the fatty acids esterified to the parent molecule.5 A critical limitation of conventional ESI-MS/MS analysis, however, is the inability to uniquely identify the position of double bonds within the fatty acid chains. This is especially problematic given the importance of double bond position in determining the biological function of lipid classes.6 Previous attempts to identify double bond position in intact phospholipids using mass spectrometry employ either MS3 or offline chemical derivatization.7-11 The former method requires specialized instrumentation and is rarely applied, while the latter methods suffer from complications inherent in sample handling prior to analysis. In this communication we outline a novel on-line approach for the identification of double bond position in intact phospholipids. In our method, the double bond(s) present in unsaturated phospholipids are cleaved by ozonolysis within the ion source of a conventional ESI mass spectrometer to give two chemically induced fragment ions that may be used to unambiguously assign the position of the double bond. This is achieved by using oxygen as the electrospray nebulizing gas in combination with high electrospray voltages to initiate the formation of ozone within the ion source.

Relevance:

60.00%

Publisher:

Abstract:

We report the tuning of oxygen content of La0.5Ca0.5MnO3-y and its effect on electrical transport and magnetic properties. A small reduction of oxygen content leads to a decrease in sample resistivity, which is more dramatic at low temperatures. No significant change is seen to occur in the magnetic properties for this case. Further reduction in the oxygen content increases the resistivity remarkably, as compared to the as-prepared sample. The amplitude of the ferromagnetic (FM) transition at 225 K decreases, and the antiferromagnetic (AFM) transition at 130 K disappears. For samples with y=0.17, insulator-metal transition and paramagnetic-ferromagnetic transition occur around 167 K. The results are explained in terms of the effect of oxygen vacancies on the coexistence of the metallic FM phase and the insulating charge ordered AFM phase.

Relevance:

60.00%

Publisher:

Abstract:

Experiments were conducted to measure the ac breakdown strength of epoxy alumina nanocomposites with different filler loadings of 0.1, 1 and 5 wt%. The experiments were performed as per the ASTM D 149 standard on samples of thickness 0.5 mm, 1 mm and 3 mm in order to study the effect of thickness on the ac breakdown strength of epoxy nanocomposites. In the case of epoxy alumina nanocomposites it was observed that the ac breakdown strength was marginally lower for 0.1 wt% and 1 wt% filler loadings and then increased at 5 wt% filler loading as compared to the unfilled epoxy. The Weibull shape parameter (beta) increased with the addition of nanoparticles to epoxy as well as with increasing sample thickness for all the filler loadings considered. DSC analysis was done to study the material properties at the filler-resin interface in order to understand the effect of the filler loading, and thereby the influence of the interface, on the ac breakdown strength of epoxy nanocomposites. It was also observed that the decrease in ac electric breakdown strength with an increase in sample thickness follows an inverse power-law dependence. In addition, the ac breakdown strength of epoxy silica nanocomposites has also been studied in order to understand the influence of the filler type on the breakdown strength.
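
The inverse power-law thickness dependence noted above can be fitted with a few lines of Python; the breakdown-field values below are hypothetical, used only to show the log-log least-squares fit of E_b = k * d^(-n).

```python
import numpy as np

thickness = np.array([0.5, 1.0, 3.0])        # sample thickness d, in mm
strength  = np.array([52.0, 41.0, 28.0])     # hypothetical mean breakdown field, kV/mm

# Linearize E_b = k * d^(-n):  log(E_b) = log(k) - n * log(d), then least squares
slope, intercept = np.polyfit(np.log(thickness), np.log(strength), 1)
n_exp, k = -slope, np.exp(intercept)
print(f"E_b ~ {k:.1f} * d^(-{n_exp:.2f})   (E_b in kV/mm, d in mm)")
```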

Relevance:

60.00%

Publisher:

Abstract:

The interaction of guar gum with the hydrophobic solids, namely talc, mica and graphite, has been investigated through adsorption, electrokinetic and flotation experiments. The adsorption densities of guar gum onto the above hydrophobic minerals show that they are more or less independent of pH. The adsorption isotherms of guar gum onto talc, mica and graphite indicate that the adsorption densities increase with increasing guar gum concentration, and all the isotherms are of the L1 type according to the Giles classification. The magnitude of the adsorption density of guar gum onto the above minerals may be arranged in the following sequence: talc > graphite > mica. The effect of particle size on the adsorption density of guar gum onto these minerals indicates that higher adsorption takes place in the coarser size fraction, consequent to an increase in the surface face-to-edge ratio. In the case of the talc and mica samples pretreated with EDTA and the leached graphite sample, a decrease in the adsorption density of guar gum is observed, due to a reduction in the metallic adsorption sites. The adsorption densities of guar gum increase with decrease in sample weight for all three minerals. Electrokinetic measurements indicate that the isoelectric points (iep) of these minerals lie between pH 2 and 3. Addition of guar gum decreases the negative electrophoretic mobility values in proportion to the guar gum concentration without any observable shift in the iep values, resembling the influence of an indifferent electrolyte. The flotation recovery is diminished in the presence of guar gum for all three minerals. The magnitude of depression follows the same sequence as observed in the adsorption studies. The floatability of the EDTA-treated talc and mica samples as well as the leached graphite sample is enhanced, complementing the adsorption data. Possible mechanisms of interaction between the hydrophobic minerals and guar gum are discussed.
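
Since the isotherms are described as Giles L1 (Langmuir-like) type, a short sketch of fitting a Langmuir isotherm is shown below; the concentrations and adsorption densities are invented for illustration and are not the measured values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Langmuir isotherm: q = q_max * K * C / (1 + K * C)."""
    return q_max * k * c / (1 + k * c)

conc = np.array([10, 25, 50, 100, 200, 400], dtype=float)   # guar gum concentration (assumed units, mg/L)
q    = np.array([0.18, 0.35, 0.52, 0.68, 0.78, 0.83])       # adsorption density (assumed units, mg/m^2)

(q_max, k_fit), _ = curve_fit(langmuir, conc, q, p0=(1.0, 0.01))
print(f"q_max = {q_max:.2f} mg/m^2, K = {k_fit:.3f} L/mg")
```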