924 results for Optimizing time on-wing
Abstract:
This thesis reports a cross-national study carried out in England and India in an attempt to clarify the association of certain cultural and non-cultural characteristics with people's work-related attitudes and values, and with the structure of their work organizations. Three perspectives are considered to be relevant to the objectives of the study. The contingency perspective suggests that a 'fit' between an organization's context and its structural arrangements will be fundamentally necessary for achieving success and survival. The political economy perspective argues for the determining role of the social and economic structures within which the organization operates. The culturalist perspective looks to cultural attitudes and values of organizational members for an explanation of their organization's structure. The empirical investigation was carried out in three stages in each of the two countries involved by means of surveys of cultural attitudes, work-related attitudes and organizational structures and systems. The cultural surveys suggested that Indian and English people were different from one another with regard to fear of, and respect and obedience to, their seniors, ability to cope with ambiguity, honesty, independence, expression of emotions, fatalism, reserve, and care for others; they were similar with regard to tolerance, friendliness, attitude to change, attitude to law, self-control and self-confidence, and attitude to social differentiation. The second stage of the study, involving the employees of fourteen organizations, found that the English ones perceived themselves to have more power at work, expressed more tolerance for ambiguity, and had different expectations of their jobs than did their Indian counterparts. The two samples were similar with respect to commitment to their company and trust in their colleagues. The findings also suggested that employees' occupations, education and age had some influence on their work-related attitudes. The final stage of the research was a study of structures, control systems, and reward and punishment policies of the same fourteen organizations, which were matched almost completely on their contextual factors across the two countries. English and Indian organizations were found to be similar in terms of centralization, specialization, chief executive's span of control, height and management control strategies. English organizations, however, were far more formalized, spent more time on consultation and their managers delegated authority lower down the hierarchy than Indian organizations did. The major finding of the study was the multiple association that cultural, national and contingency factors had with the structural characteristics of the organizations and with the work-related attitudes of their members. On the basis of this finding, a multi-perspective model for understanding organizational structures and systems is proposed in which the contributions made by contingency, political economy and cultural perspectives are recognized and incorporated.
Abstract:
This thesis examines the ways that libraries have employed computers to assist with housekeeping operations. It considers the relevance of such applications to company libraries in the construction industry, and describes more specifically the development of an integrated cataloguing and loan system. A review of the main features in the development of computerised ordering, cataloguing and circulation control systems shows that fully integrated packages are beginning to be completed, and that some libraries are introducing second-generation programs. Cataloguing is the most common activity to be computerised, at both national and company level. Results from a sample of libraries in the construction industry suggest that the only computerised housekeeping system is at Taylor Woodrow. Most of the firms have access to an in-house computer, and some of the libraries, particularly those in firms of consulting engineers, might benefit from computerisation, but there are differing attitudes amongst the librarians towards the computer. A detailed study of the library at Taylor Woodrow resulted in a feasibility report covering all the areas of its activities. One of the main suggestions was the possible use of a computerised loans and cataloguing system. An integrated system to cover these two areas was programmed in Fortran and implemented. This new system provides certain benefits and saves staff time, but at the cost of computer time. Some improvements could be made by reprogramming, but even as it stands it provides a general system for small technical libraries. A general equation comparing costs for manual and computerised operations is progressively simplified to a form in which the annual saving from the computerised system is expressed in terms of staff and computer costs and the size of the library. This equation gives any library an indication of the savings or extra cost which would result from using the computerised system.
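The cost comparison summarized above lends itself to a short computational sketch. The function name and the linear cost structure below are illustrative assumptions only; the thesis derives its own general equation, which is not reproduced in this abstract.

```python
# Illustrative sketch of a manual-vs-computerised cost comparison.
# The variable names and the linear cost structure are assumptions for
# illustration; they are not the thesis's actual equation.

def annual_saving(items_processed_per_year: int,
                  staff_cost_per_item_manual: float,
                  staff_cost_per_item_computerised: float,
                  computer_cost_per_item: float,
                  fixed_computer_cost: float) -> float:
    """Annual saving from computerisation: staff time saved minus computer costs."""
    staff_saving = items_processed_per_year * (staff_cost_per_item_manual
                                               - staff_cost_per_item_computerised)
    computer_cost = fixed_computer_cost + items_processed_per_year * computer_cost_per_item
    return staff_saving - computer_cost

# A positive result suggests the computerised system would save money;
# a negative result indicates an extra cost.
print(annual_saving(5000, 0.80, 0.30, 0.15, 1200.0))
```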
Abstract:
In this paper we investigate the rate adaptation algorithm SampleRate, which spends a fixed amount of time on bit-rates other than the currently measured best bit-rate. A simple but effective analytic model is proposed to study the steady-state behavior of the algorithm. The impacts of link condition, channel congestion and multi-rate retry on the algorithm's performance are modeled. Simulations validate the model. It is also observed that there is still a large performance gap between SampleRate and the optimal scheme in the case of high frame collision probability.
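For readers unfamiliar with SampleRate, the sketch below illustrates its general idea: transmit mostly at the bit-rate with the lowest measured average transmission time, while spending a fixed share of packets probing other bit-rates. The sampling fraction, rate set and bookkeeping are assumptions for illustration; the paper's analytic model is not reproduced here.

```python
import random

# Simplified SampleRate-style rate adaptation loop (illustrative only).
RATES_MBPS = [6, 12, 24, 36, 48, 54]
SAMPLE_FRACTION = 0.1  # assumed fixed share of packets used to probe other rates

# Per-rate average transmission time, seeded with optimistic estimates.
avg_tx_time = {r: 1.0 / r for r in RATES_MBPS}

def pick_rate() -> int:
    """Use the best measured rate most of the time; sample other rates occasionally."""
    best = min(avg_tx_time, key=avg_tx_time.get)
    if random.random() < SAMPLE_FRACTION:
        return random.choice([r for r in RATES_MBPS if r != best])
    return best

def record_result(rate: int, measured_tx_time: float, alpha: float = 0.1) -> None:
    """Update the per-rate average transmission time with an exponential moving average."""
    avg_tx_time[rate] = (1 - alpha) * avg_tx_time[rate] + alpha * measured_tx_time
```

In the real algorithm the per-rate statistics also account for retries and frame losses, which is where the paper's modelling of link condition, collisions and multi-rate retry comes in.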
Abstract:
We report for the first time on the limitations in the operational power range of network traffic in the presence of heterogeneous 28-Gbaud polarization-multiplexed quadrature amplitude modulation (PM-mQAM) channels in a nine-channel dynamic optical mesh network. In particular, we demonstrate that transponders which autonomously select a modulation order and launch power to optimize their own performance will have a severe impact on copropagating network traffic. Our results also suggest that altruistic transponder operation may offer even lower penalties than fixed launch power operation.
Abstract:
We report for the first time on the limitations in the operational power range of few-mode-fiber-based transmission systems, employing 28-Gbaud quadrature phase shift keying transponders, over 1,600 km. It is demonstrated that if an additional mode is used on a pre-existing few-mode transmission link and allowed to optimize its performance, it will have a significant impact on the pre-existing mode. In particular, we show that for low mode coupling strengths (weak coupling regime), the newly added variable-power mode does not considerably impact the fixed-power existing mode, with performance penalties of less than 2 dB (in Q-factor). On the other hand, as mode coupling strength is increased (strong coupling regime), the individual launch power optimization significantly degrades the system performance, with penalties up to ∼6 dB. Our results further suggest that mutual power optimization of both the fixed-power and variable-power modes reduces power-allocation-related penalties to less than 3 dB, for any given coupling strength, for both high and low differential mode delays. © 2013 Optical Society of America.
Abstract:
Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage. The suitability of these breakage models was evaluated for their ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to provide a satisfactory capability for predicting the breakage of the whey protein precipitate particles. The whey protein precipitate dispersions were propelled through a number of different geometries such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries. © 2005 Elsevier Ltd. All rights reserved.
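The "power-law threshold" idea named above is often written so that breakage starts only once the turbulent eddy dissipation rate exceeds a critical value and then grows with exposure time. The functional form, symbols and parameter values below are assumed for illustration and are not the fitted model from this study.

```python
# Illustrative power-law threshold breakage model (assumed form):
# no breakage below a critical turbulent eddy dissipation rate (TED),
# and breakage increasing with excess dissipation and exposure time.

def breakage_fraction(ted: float, exposure_time: float,
                      ted_critical: float = 1.0e4,  # assumed threshold, W/kg
                      k: float = 1.0e-6,            # assumed rate constant
                      n: float = 1.5) -> float:     # assumed power-law exponent
    """Fraction of particles broken for a given TED and exposure time."""
    if ted <= ted_critical:
        return 0.0
    return min(1.0, k * (ted - ted_critical) ** n * exposure_time)

# Higher dissipation rates and longer exposure give more breakage.
print(breakage_fraction(ted=5.0e4, exposure_time=0.01))
print(breakage_fraction(ted=2.0e5, exposure_time=0.01))
```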
Abstract:
In this paper the network problem of determining all-pairs shortest paths is examined. A distributed algorithm which runs in O(n) time on a network of n nodes is presented. The number of messages used by the algorithm is O(e + n log n), where e is the number of communication links of the network. We prove that this algorithm is time-optimal.
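Since the abstract gives only the complexity of the distributed algorithm, the sketch below is merely a centralised reference computation of all-pairs shortest paths on an unweighted graph (one breadth-first search per node). It is useful as a baseline for checking results and is not the distributed O(n)-time algorithm of the paper.

```python
from collections import deque

def all_pairs_shortest_paths(adj: dict) -> dict:
    """Centralised BFS-based all-pairs shortest paths for an unweighted graph."""
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d
    return dist

# Example: a 4-node ring network.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(all_pairs_shortest_paths(ring)[0])  # {0: 0, 1: 1, 3: 1, 2: 2}
```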
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) was presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performances of the algorithms were evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to support the denoising algorithms in performing more effectively. © 2012 Elsevier Ltd. All rights reserved.
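Two of the quality metrics listed above are straightforward to compute; the generic NumPy sketch below (an assumption, not the authors' evaluation code) shows MSE and PSNR, while SSIM is usually taken from a library such as scikit-image.

```python
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Mean square error between a reference image and a test image."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    error = mse(reference, test)
    return float("inf") if error == 0 else 10.0 * np.log10(max_value ** 2 / error)

# Example: a synthetic image degraded by Poisson (quantum-like) noise.
rng = np.random.default_rng(0)
clean = rng.integers(50, 200, size=(64, 64)).astype(np.float64)
noisy = np.clip(rng.poisson(clean), 0, 255).astype(np.float64)
print(f"MSE = {mse(clean, noisy):.1f}, PSNR = {psnr(clean, noisy):.1f} dB")
```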
Abstract:
Drastic improvements in styrene yield and selectivity were achieved in the oxidative dehydrogenation of ethylbenzene by staged feeding of O2. Six isothermal packed bed reactors were used in series with intermediate feeding of O2, while all EB was fed to the first reactor, diluted with helium or CO2 (1:5 molar ratio), resulting in total O2:EB molar feed ratios of 0.2-0.6. The two catalyst samples applied, γ-Al2O3 and 5P/SiO2, both benefitted from this operation mode. The ethylbenzene conversion per stage and the selectivity to styrene were significantly improved. The production of COx was effectively reduced, while the selectivity to other side products remained unchanged. Compared with co-feeding at a total O2:EB molar feed ratio of 0.6, staged feeding improved the EB conversion (+15% points for both catalysts), ST selectivity (+4% points for both samples) and O2 (ST) selectivity (+9% points for γ-Al2O3 and +17% points for 5P/SiO2). The ethylbenzene conversion over 5P/SiO2 can be increased from 18% to 70% by increasing the number of reactors from 1 to 6, with each reactor receiving an O2:EB amount of 0.1, without loss of ST selectivity (93%). For 5P/SiO2 a higher temperature (500 °C vs. 450 °C for Al2O3) is required. Essentially, more catalyst (5P/SiO2) was required to achieve full O2 conversion in each reactor. Staged feeding of O2 does not eliminate the existing issues of catalyst stability, both with time-on-stream and as a function of the number of catalyst regenerations (5P/SiO2), or the relatively moderate performance (relatively low styrene selectivity for γ-Al2O3). © 2014 Elsevier B.V.
Abstract:
A packed bed microbalance reactor setup (TEOM-GC) is used to investigate the formation of coke as a function of time-on-stream on γ-Al2O3 and 3P/SiO2 catalyst samples under different conditions for the ODH reaction of ethylbenzene to styrene. All samples show a linear correlation of the styrene selectivity and yield with the initial coverage of coke. The COx production increases with the coverage of coke. On the 3 wt% P/SiO2 sample, the initial coke build-up is slow and the coke deposition rate increases with time. On alumina-based catalyst samples, a fast initial coke build-up takes place, decreasing with time-on-stream, but the amount of coke does not stabilize. A higher O2:EB feed ratio results in more coke, and a higher temperature results in less coke. This coking behaviour of Al2O3 can be described by existing "monolayer-multilayer" models. Further, the coverage of coke on the catalyst varies with the position in the bed. For maximal styrene selectivity, the optimal coverage of coke should be sufficient to convert all O2, but as low as possible to prevent selectivity loss through COx production. This favours high temperature and low O2:EB feed ratios. The optimal coke coverage depends in a complex way on all the parameters: temperature, the O2:EB feed ratio, reactant concentrations, and the type of starting material.
Abstract:
Commercially available γ-Al2O3 was calcined at temperatures between 500 and 1200 °C and tested for its performance in the oxidative ethylbenzene dehydrogenation (ODH) over a wide range of industrially relevant conditions. The original γ-Al2O3, as well as η- and α-Al2O3, were tested. A calcination temperature around 1000/1050 °C turned out to be optimal for the ODH performance. Upon calcination the number of acid sites (from 637 to 436 μmol g-1) and the surface area (from 272 to 119 m2 g-1) decrease, whereas the acid site density increases (from 1.4 to 2.4 sites per nm2). Less coke, being the active catalyst, is formed during ODH on the Al-1000 sample compared to γ-Al2O3 (30.8 wt% vs. 21.6 wt%), but the coke surface coverage increases. Compared with γ-Al2O3, the EB conversion increased from 36% to 42% and the ST selectivity increased from 83% to 87%. For an optimal ST selectivity the catalyst should contain enough coke to attain full conversion of the limiting reactant oxygen. The reactivity of the coke is changed due to the higher density and strength of the Lewis acid sites that are formed by the high-temperature calcination. The Al-1000 sample and all other investigated catalysts lost ODH activity with time on stream. The loss of selectivity towards more COx formation is directly correlated with the amount of coke. © The Royal Society of Chemistry 2013.
Abstract:
Cohort programs have been instituted at many universities to accommodate the growing number of mature adult graduate students who pursue degrees while maintaining multiple commitments such as work and family. While it is estimated that as many as 40–60% of students who begin graduate study fail to complete degrees, it is thought that attrition may be even higher for this population of students. Yet, little is known about the impact of cohorts on the learning environment and whether cohort programs affect graduate student retention. Retention theory stresses the importance of the academic department, quality of faculty-student relationships and student involvement in the life of the academic community as critical determinants in students' decisions to persist to degree completion. However, students who are employed full-time typically spend little time on campus engaged in the learning environment. Using academic and social integration theory, this study examined the experiences of working adult graduate students enrolled in cohort (CEP) and non-cohort (non-CEP) programs and the influence of these experiences on intention to persist. The Graduate Program Context Questionnaire was administered to graduate students (N = 310) to examine measures of academic and social integration and intention to persist. Sample t tests and ANOVAs were conducted to determine whether differences in perceptions could be identified between cohort and non-cohort students. Multiple linear regression was used to identify variables that predict students' intention to persist. While there were many similarities, significant differences were found between CEP and non-CEP student groups on two measures. CEP students rated peer-student relationships higher and scored higher on the intention to persist measure than non-CEP students. The psychological integration measure, however, was the strongest predictor of intention to persist for both the CEP and non-CEP groups. This study supports the research literature which suggests that CEP programs encourage the development of peer-student relationships and promote students' commitment to persistence.
Abstract:
Arsenic has been classified as a group I carcinogen. It has been ranked number one in the CERCLA priority list of hazardous substances due to its frequency, toxicity and potential for human exposure. Paradoxically, arsenic has been employed as a successful chemotherapeutic agent for acute promyelocytic leukemia and has found some success in multiple myeloma. Since arsenic toxicity and efficacy are species dependent, a speciation method, based on the complementary use of reverse phase and cation exchange chromatography, was developed. An inductively coupled plasma mass spectrometer (ICP-MS), as an element-specific detector, and an electrospray ionization mass spectrometer (ESI-MS), as a molecule-specific detector, were employed. Low detection limits in the µg L−1 range on the ICP-MS and the mg L−1 range on the ESI-MS were obtained. The developed methods were validated against each other through the use of a Deming plot. With the developed speciation method, the effects of pH on the stability of As species and of reduced glutathione (GSH) concentration on the formation and stability of arsenic glutathione complexes were studied. To identify arsenicals in multiple myeloma (MM) cell lines after arsenic trioxide (ATO) and darinaparsin (DAR) incubation, an extraction method based on the use of an ultrasonic probe was developed. Extraction tools and solvents were evaluated, and the effect of GSH concentration on the quantitation of arsenic glutathione (As-GSH) complexes in MM cell extracts was studied. The developed method was employed for the identification of metabolites in DAR-incubated cell lines, where the effect of extraction pH, DAR incubation concentration and incubation time on the relative distribution of the As metabolites was assessed. A new arsenic species, dimethylarsinothioyl glutathione (DMMTAV-GS), a pentavalent thiolated arsenical, was identified in the cell extracts through the use of liquid chromatography tandem mass spectrometry. The formation of the new metabolite in the extracts was dependent on the decomposition of S-dimethylarsino-glutathione (DMA(GS)). These results have major implications in both the medical and toxicological fields of As because they involve the metabolism of a chemotherapeutic agent and the role sulfur compounds play in this mechanism.
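The Deming plot mentioned above compares two measurement methods with a regression that allows error in both variables. The sketch below is a generic Deming regression with an assumed error-variance ratio of 1 and made-up concentrations; it illustrates the technique only and is not the study's validation data or code.

```python
import numpy as np

def deming_regression(x, y, lambda_ratio: float = 1.0):
    """Fit y = intercept + slope * x allowing measurement error in both x and y.

    lambda_ratio is the ratio of the error variance of y to that of x
    (assumed equal to 1 here)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_mean, y_mean = x.mean(), y.mean()
    s_xx = np.mean((x - x_mean) ** 2)
    s_yy = np.mean((y - y_mean) ** 2)
    s_xy = np.mean((x - x_mean) * (y - y_mean))
    slope = (s_yy - lambda_ratio * s_xx
             + np.sqrt((s_yy - lambda_ratio * s_xx) ** 2
                       + 4 * lambda_ratio * s_xy ** 2)) / (2 * s_xy)
    intercept = y_mean - slope * x_mean
    return slope, intercept

# Hypothetical concentrations of one As species measured by two methods;
# a slope near 1 and an intercept near 0 indicate agreement.
method_a = [1.0, 2.1, 3.0, 4.2, 5.1]
method_b = [0.9, 2.0, 3.2, 4.0, 5.3]
print(deming_regression(method_a, method_b))
```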
Abstract:
Many students are entering colleges and universities in the United States underprepared in mathematics. National statistics indicate that only approximately one-third of students in developmental mathematics courses pass. When underprepared students repeatedly enroll in courses that do not count toward their degree, it costs them money and delays graduation. This study investigated a possible solution to this problem: whether using a particular computer-assisted learning strategy combined with mastery learning techniques improved the overall performance of students in a developmental mathematics course. Participants received one of three teaching strategies: (a) group A was taught using traditional instruction with mastery learning supplemented with computer-assisted instruction, (b) group B was taught using traditional instruction supplemented with computer-assisted instruction in the absence of mastery learning, and (c) group C was taught using traditional instruction without mastery learning or computer-assisted instruction. Participants were students in MAT1033, a developmental mathematics course at a large public 4-year college. An analysis of covariance using participants' pretest scores as the covariate tested the null hypothesis that there was no significant difference in the adjusted mean final examination scores among the three groups. Group A participants had a significantly higher adjusted mean posttest score than did group C participants. A chi-square test tested the null hypothesis that there were no significant differences in the proportions of students who passed MAT1033 among the treatment groups. It was found that there was a significant difference in the proportion of students who passed among all three groups, with those in group A having the highest pass rate and those in group C the lowest. A discriminant factor analysis revealed that time on task correctly predicted the passing status of 89% of the participants. It was concluded that the most efficacious strategy for teaching developmental mathematics was the use of mastery learning supplemented by computer-assisted instruction. In addition, it was noted that time on task was a strong predictor of academic success, over and above the predictive ability of a measure of previous knowledge of mathematics.
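The two main analyses described above (an analysis of covariance with the pretest as covariate, and a chi-square test of pass rates) can be set up roughly as in the sketch below, which uses statsmodels and SciPy on made-up data; the variable names and numbers are assumptions, not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Made-up scores for three teaching-strategy groups (illustration only).
df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "C", "C"] * 10,
    "pretest":  [55, 60, 52, 58, 50, 57] * 10,
    "posttest": [78, 82, 70, 74, 62, 69] * 10,
})

# ANCOVA: group differences in posttest scores, adjusted for pretest scores.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square test of pass rates (rows: groups; columns: pass, fail counts).
pass_fail = [[18, 2],   # group A
             [14, 6],   # group B
             [10, 10]]  # group C
chi2, p, dof, expected = chi2_contingency(pass_fail)
print(chi2, p)
```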
Abstract:
More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table, as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), or the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances. Therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the results show that the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057. The results show that the logarithmically transformed TTTs were not significantly affected by the interaction of the two methods, F(1,15) = 1.381, p = 0.258. These results suggest that some confusion may be caused in the subject when both methods are employed simultaneously. The significant effect of tonal variation indicates that this effect actually increases the average TTT; in other words, the presence of preceding tones increases task completion time on average. The marginally significant effect of stereo spatialization decreases the average log(TTT) from 2.405 to 2.264.
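The transformation and test reported above can be reproduced in outline as follows: log-transform the time-to-target values and run a two-factor repeated-measures ANOVA. The sketch uses statsmodels on synthetic data; the numbers and factor labels are assumptions, not the experimental data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic TTT data: 16 subjects, two within-subject factors
# (tonal variation and stereo spatialization), one observation per cell.
rng = np.random.default_rng(1)
rows = []
for subject in range(16):
    for tone in ("tones", "no_tones"):
        for stereo in ("panned", "centered"):
            ttt_seconds = rng.uniform(5.0, 20.0)
            rows.append({"subject": subject, "tone": tone, "stereo": stereo,
                         "log_ttt": np.log(ttt_seconds)})
df = pd.DataFrame(rows)

# Two-factor repeated-measures ANOVA on the log-transformed times.
result = AnovaRM(df, depvar="log_ttt", subject="subject",
                 within=["tone", "stereo"]).fit()
print(result)
```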