880 results for enhancement of performance
Abstract:
This report is one of two products of this project, the other being a design guide. This report describes test results and comparative analysis from 16 different portland cement concrete (PCC) pavement sites on local city and county roads in Iowa. At each site, the surface conditions of the pavement (i.e., crack survey) and the foundation layer strength, stiffness, and hydraulic conductivity properties were documented. The field test results were used to calculate in situ parameters used in pavement design per SUDAS and AASHTO (1993) design methodologies. Overall, the results of this study demonstrate how in situ and lab testing can be used to assess the support conditions and design values for pavement foundation layers and how the measurements compare to the assumed design values. The measurements show that a wide range of pavement conditions and foundation layer support values exists in Iowa. The calculated design input values for the test sites (modulus of subgrade reaction, coefficient of drainage, and loss of support) were found to be different from those typically assumed. This finding was true for the full range of materials tested. The findings of this study support the recommendation to incorporate field testing as part of the process to field-verify pavement design values and to consider the foundation as a design element in the pavement system. Recommendations are provided in the form of a simple matrix of alternative foundation treatment options if the existing foundation materials do not meet the design intent. The PCI prediction model developed from multivariate analysis in this study demonstrated a link between pavement foundation conditions and PCI. The model analysis shows that, by measuring properties of the pavement foundation, the engineer will be able to predict long-term performance with higher reliability than by considering age alone. This prediction can then be used as motivation to control the engineering properties of the pavement foundation for new or reconstructed PCC pavements to achieve a desired level of performance (i.e., PCI) over time.
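A minimal sketch of the kind of multivariate PCI prediction described above, assuming hypothetical foundation-property predictors (modulus of subgrade reaction, hydraulic conductivity) alongside pavement age; the variable names, data values and fitted model are illustrative only, not the study's actual regression:

```python
import numpy as np

# Hypothetical observations: one row per test site.
# Columns: pavement age (years), modulus of subgrade reaction k (pci),
# hydraulic conductivity (cm/s). Values are illustrative only.
X_raw = np.array([
    [12.0, 150.0, 1e-4],
    [20.0,  90.0, 5e-6],
    [ 8.0, 220.0, 3e-4],
    [25.0,  70.0, 1e-6],
    [15.0, 180.0, 2e-4],
])
pci = np.array([78.0, 55.0, 88.0, 42.0, 74.0])  # observed PCI (0-100)

# Ordinary least-squares fit of PCI on age plus foundation properties.
X = np.column_stack([np.ones(len(pci)), X_raw])
coef, *_ = np.linalg.lstsq(X, pci, rcond=None)

# Predict PCI for a new site; comparing against an age-only fit would show
# whether the foundation measurements add explanatory power.
new_site = np.array([1.0, 18.0, 120.0, 8e-5])
print("predicted PCI:", new_site @ coef)
```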
Abstract:
The work described in this report documents the activities performed for the evaluation, development, and enhancement of the Iowa Department of Transportation (DOT) pavement condition information as part of its pavement management system operation. The study covers all of the Iowa DOT's interstate and primary National Highway System (NHS) and non-NHS routes. A new pavement condition rating system that provides a consistent, unified approach to rating pavements in Iowa is proposed. The proposed 100-point system is based on five individual indices derived from specific distress data and pavement properties, and an overall pavement condition index, PCI-2, that combines the individual indices using weighting factors. The individual indices cover cracking, ride, rutting, faulting, and friction. The Cracking Index is formed by combining cracking data (transverse, longitudinal, wheel-path, and alligator cracking indices). The ride, rutting, and faulting indices utilize the International Roughness Index (IRI), rut depth, and fault height, respectively.
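A minimal sketch of how an overall index such as the proposed PCI-2 could combine the five individual indices with weighting factors; the index values and weights below are placeholders, not the values adopted by the Iowa DOT rating system:

```python
# Hypothetical individual indices on a 0-100 scale (higher is better).
indices = {"cracking": 72.0, "ride": 85.0, "rutting": 90.0,
           "faulting": 95.0, "friction": 80.0}

# Placeholder weighting factors; the actual PCI-2 weights are defined by the
# proposed rating system and are not reproduced here.
weights = {"cracking": 0.40, "ride": 0.25, "rutting": 0.15,
           "faulting": 0.10, "friction": 0.10}

# Weighted combination on the same 0-100 scale.
pci_2 = sum(weights[k] * indices[k] for k in indices)
print(f"PCI-2 = {pci_2:.1f}")
```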
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed, leading to the TTF. As expected, the first results showed that the TTF depends on image contrast and noise level for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR and from ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
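As background, a commonly used Fourier-domain form of the NPW figure of merit is given below, written with the TTF substituted for the MTF as described above; here W(u,v) denotes the task function (the Fourier transform of the signal to be detected) and NPS(u,v) the noise power spectrum. This is the generic textbook expression, not necessarily the exact formulation used in the study:

```latex
\mathrm{SNR}_{\mathrm{NPW}}^{2} \;=\;
\frac{\left[\displaystyle\iint \lvert W(u,v)\rvert^{2}\,
      \mathrm{TTF}^{2}(u,v)\, du\, dv\right]^{2}}
     {\displaystyle\iint \lvert W(u,v)\rvert^{2}\,
      \mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\, du\, dv}
```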
Abstract:
Joints are always a concern in the construction and long-term performance of concrete pavements. Research has shown that some type of positive load transfer is needed across transverse joints. The same research has directed pavement designers to use round dowels spaced at regular intervals across the transverse joint to distribute vehicle loads both longitudinally and transversely across the joint. The goal is to reduce bearing stresses on the dowels and on the two pavement slab edges, as well as erosion of the underlying surface, and hence to improve long-term joint and pavement structure performance. Road salts cause corrosion of metal dowels in joints, excessive bearing stresses can hollow out the concrete around dowel ends, and construction processes are associated with pavement cracking at the ends of dowels. Dowels also add to pavement cost when joint spacing is reduced to control curling and warping distress. Designers want to place an adequate number of dowels, spaced at the proper locations, to handle the anticipated loads and bearing stresses over the design life of the pavement. This interim report is the second of three reports on the evaluation of elliptical steel dowels. It provides an update on the testing and performance of the various shapes and sizes of dowels. It also documents the results of the first series of performance surveys and draws interim conclusions about the performance of various bar shapes, sizes, spacings, and basket configurations. In addition to the study of elliptical steel dowel performance, fiber-reinforced polymer (FRP) elliptical dowels are also being tested (in contrast to steel) on a section of the highway construction north of the elliptical steel test sections.
Abstract:
New methods and devices for pursuing performance enhancement through altitude training were developed in Scandinavia and the USA in the early 1990s. At present, several forms of hypoxic training and/or altitude exposure exist: traditional 'live high-train high' (LHTH), contemporary 'live high-train low' (LHTL), intermittent hypoxic exposure at rest (IHE) and intermittent hypoxic exposure during training sessions (IHT). Although substantial differences exist between these methods of hypoxic training and/or exposure, all have the same goal: to induce an improvement in athletic performance at sea level. They are also used in preparation for competition at altitude and/or for the acclimatization of mountaineers. The mechanisms underlying the effects of hypoxic training are widely debated. Although the popular view is that altitude training may lead to an increase in haematological capacity, this may not be the main, or the only, factor involved in the improvement of performance. Other central (such as ventilatory, haemodynamic or neural adaptation) or peripheral (such as muscle buffering capacity or economy) factors play an important role. LHTL has been shown to be an efficient method. The optimal altitude for living high has been defined as 2200-2500 m to provide an optimal erythropoietic effect and up to 3100 m for non-haematological parameters. The optimal duration at altitude appears to be 4 weeks for inducing accelerated erythropoiesis, whereas <3 weeks (i.e. 18 days) is long enough for beneficial changes in economy, muscle buffering capacity, the hypoxic ventilatory response or Na(+)/K(+)-ATPase activity. One critical point is the daily dose of altitude. A natural altitude of 2500 m for 20-22 h/day (in fact, travelling down to the valley only for training) appears sufficient to increase erythropoiesis and improve sea-level performance. 'Longer is better' as regards haematological changes, since additional benefits have been shown as hypoxic exposure increases beyond 16 h/day. The minimum daily dose for stimulating erythropoiesis seems to be 12 h/day. For non-haematological changes, a much shorter duration of exposure seems possible. Athletes could take advantage of IHT, which seems more beneficial than IHE for performance enhancement. The intensity of hypoxic exercise might play a role in adaptations at the molecular level in skeletal muscle tissue. There is clear evidence that intense exercise at high altitude stimulates muscle adaptations to a greater extent for both aerobic and anaerobic exercise and limits the decrease in power. So although IHT induces no increase in VO(2max) owing to the low 'altitude dose', improvement in athletic performance is likely with high-intensity exercise (i.e. above the ventilatory threshold) owing to an increase in mitochondrial efficiency and pH/lactate regulation. We propose a new combination of hypoxic methods (which we suggest naming Living High-Training Low and High, interspersed; LHTLHi) combining LHTL (five nights at 3000 m and two nights at sea level) with training at sea level except for a few (2-3 per week) IHT sessions of supra-threshold training. This review also provides a rationale for how to combine the different hypoxic methods and suggests advances in both their implementation and their periodization during the yearly training programme of athletes competing in endurance, glycolytic or intermittent sports.
Abstract:
In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within an HMM framework, the Baum-Welch (BW) identification algorithm is frequently used to find maximum-likelihood (ML) estimates of the corresponding model. However, such a procedure assumes the model (i.e., the channel response) to be static throughout the observation sequence. By introducing a parametric model for time-varying channel responses, a version of the algorithm that is more appropriate for mobile channels, the time-dependent Baum-Welch (TDBW) algorithm, is derived. To compare algorithm behavior, a set of computer simulations for a GSM scenario is provided. Results indicate that, in comparison to other Baum-Welch (BW) versions of the algorithm, the TDBW approach attains a remarkable enhancement in performance. For that purpose, only a moderate increase in computational complexity is needed.
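For readers unfamiliar with the baseline algorithm, a minimal sketch of one standard (static-channel) Baum-Welch re-estimation step for a discrete-observation HMM follows; it is a generic textbook implementation, not the authors' TDBW variant, and the scaling scheme and variable names are illustrative:

```python
import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One Baum-Welch (EM) re-estimation step for a discrete-observation HMM.

    obs: 1-D array of observation symbol indices (length T)
    A:   state transition matrix, shape (N, N)
    B:   emission probability matrix, shape (N, M)
    pi:  initial state distribution, shape (N,)
    """
    obs = np.asarray(obs)
    N, T = A.shape[0], len(obs)

    # Forward pass (alpha) with per-step scaling to avoid numerical underflow.
    alpha = np.zeros((T, N))
    scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass (beta), scaled consistently with the forward pass.
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]

    # Posterior state probabilities (gamma) and transition posteriors (xi).
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()

    # M-step: re-estimate the (static) model parameters.
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi
```

The TDBW variant described in the paper replaces the static channel assumption with a parametric, time-varying model; that extension is not shown here.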
Abstract:
Performance optimization of a complex computer system requires an understanding of the system's runtime behaviour. As software grows in size and complexity, performance optimization becomes an increasingly important part of the product development process. With the use of more powerful processors, energy consumption and heat generation have also become increasingly significant problems, especially in small, portable devices. To limit thermal and energy problems, performance scaling methods have been developed, which further increase system complexity and the need for performance optimization. In this work, a visualization and analysis tool was developed to facilitate the understanding of runtime behaviour. In addition, a performance metric was developed that enables different scaling methods to be compared and evaluated independently of the execution environment, based either on an execution trace or on theoretical analysis. The tool presents a trace collected at run time in an easily understandable form. It displays, among other things, the processes, the processor load, the operation of the scaling methods and the energy consumption using three-dimensional graphics. The tool also produces numerical data for a user-selected part of the trace, including several relevant performance figures and statistics. The applicability of the tool was examined by analysing an execution trace obtained from a real device and a simulation of performance scaling. The effect of the scaling mechanism's parameters on the performance of the simulated device was analysed.
Abstract:
To achieve success in a constantly changing environment and with ever-increasing competition, companies must develop their operations continuously. To do this, they must have a clear vision of what they want to be in the future. This vision can be attained through careful planning and strategising. One method of transforming a strategy and vision into an everyday tool used by employees is a balanced performance measurement system. The importance of performance measurement in the implementation of companies' visions and strategies has grown substantially in the last ten years. Measures are derived from the company's critical success factors and from many different perspectives, covering three time dimensions: past, present and future. Many such performance measurement systems have been created since the 1990s. This is a case study whose main objective is to provide a recommendation for how the case company could make use of performance measurement to support strategic management. To answer this question, the study draws on literature-based research and empirical research conducted at the case company's premises. The theoretical part of the study consists of two sections: introducing the Balanced Scorecard and discussing how it supports strategic management and change management. The empirical part of the study determines the company's present performance measurement situation through interviews in the company. The study resulted in a recommendation that the company start developing a Balanced Scorecard system. By setting up this kind of process, the company would be able to shift its focus more towards the future, begin implementing a more process-based organisation and get its employees to work together towards common goals.
Abstract:
This thesis investigates performance persistence among equity funds investing in Russia during 2003-2007. Fund performance is measured using several methods, including the Jensen alpha, the Fama-French 3-factor alpha, the Sharpe ratio and two of its variations. Moreover, we apply Bayesian shrinkage estimation in performance measurement and evaluate its usefulness compared with the OLS 3-factor alphas. The pattern of performance persistence is analyzed using the Spearman rank correlation test, cross-sectional regression analysis and stacked return time series. Empirical results indicate that the Bayesian shrinkage estimates may provide better and more accurate estimates of fund performance than the OLS 3-factor alphas. Second, based on the results, it seems that the degree of performance persistence is strongly related to the length of the observation period. For the full sample period the results show strong signs of performance reversal, whereas the subperiod analysis indicates performance persistence during the most recent years.
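For reference, the Fama-French 3-factor alpha mentioned above is the intercept of the standard regression below, and the Sharpe ratio is the mean excess return divided by the standard deviation of excess returns; this is the textbook formulation, not a reproduction of the thesis's exact estimation setup (the Bayesian shrinkage estimator is not shown):

```latex
R_{i,t} - R_{f,t} \;=\; \alpha_i \;+\; \beta_i\,(R_{m,t} - R_{f,t})
\;+\; s_i\,\mathrm{SMB}_t \;+\; h_i\,\mathrm{HML}_t \;+\; \varepsilon_{i,t},
\qquad
\mathrm{Sharpe}_i \;=\; \frac{\overline{R_i - R_f}}{\sigma(R_i - R_f)}
```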
Abstract:
A novel cantilever pressure sensor was developed in the Department of Physics at the University of Turku in order to solve the sensitivity problems which are encountered when condenser microphones are used in photoacoustic spectroscopy. The cantilever pressure sensor, combined with a laser interferometer for the measurement of the cantilever movements, proved to be highly sensitive. The original aim of this work was to integrate the sensor in a photoacoustic gas detector working in a differential measurement scheme. The integration was made successfully into three prototypes. In addition, the cantilever was also integrated in the photoacoustic FTIR measurement schemes of gas-, liquid-, and solid-phase samples. A theoretical model for the signal generation in each measurement scheme was created and the optimal cell design discussed. The sensitivity and selectivity of the differential method were evaluated when a blackbody radiator and a mechanical chopper were used with CO2, CH4, CO, and C2H4 gases. The detection limits were in the sub-ppm level for all four gases with only a 1.3-second integration time and the cross interference was well below one percent for all gas combinations other than those between hydrocarbons. Sensitivity with other infrared sources was compared using ethylene as an example gas. In the comparison of sensitivity with different infrared sources, the electrically modulated blackbody radiator gave a 35 times higher and the CO2-laser a 100 times lower detection limit than the blackbody radiator with a mechanical chopper. As a conclusion, the differential system is well suited to rapid single gas measurements. Gas-phase photoacoustic FTIR spectroscopy gives the best performance when several components have to be analyzed simultaneously from multicomponent samples. Multicomponent measurements were demonstrated with a sample that contained different concentrations of CO2, H2O, CO, and four different hydrocarbons. It required an approximately 10 times longer measurement time to achieve the same detection limit for a single gas as with the differential system. The properties of the photoacoustic FTIR spectroscopy were also compared to conventional transmission FTIR spectroscopy by simulations. Solid- and liquid-phase photoacoustic FTIR spectroscopy has several advantages compared to other techniques and therefore it also has a great variety of applications. A comparison of the signal-to-noise ratio between photoacoustic cells with a cantilever microphone and a condenser microphone was done with standard carbon black, polyethene, and sunflower oil samples. The cell with the cantilever microphone proved to have a 5-10 times higher signal-to-noise ratio than the reference detector, depending on the sample. Cantilever-enhanced photoacoustics will be an effective tool for gas detection and analysis of solid- and liquid-phase samples. The preliminary prototypes gave good results in all three measurement schemes that were studied. According to simulations, there are possibilities for further enhancement of the sensitivity, as well as other properties, of each system.
Abstract:
The objective of the present study was to evaluate the anticoccidial effect of different concentrations of acetic acid in broiler chickens in comparison with the anticoccidial amprolium. A total of 198 chicks were placed 11 per pen, with three pens per treatment. The different concentrations of acetic acid (1%, 2% and 3%) and amprolium (at a dose rate of 125 ppm) were given to the experimental groups in drinking water from the 10th to the 19th day of age. One group was kept as an infected, non-medicated control and one as a non-infected, non-medicated control. All groups except the non-infected, non-medicated control were inoculated orally with 75,000 sporulated oocysts on the 12th day of age. The anticoccidial effect was evaluated on the basis of performance (weight gain, feed conversion ratio) and pathogenic (oocyst score, lesion score and mortality percentage) parameters. Among the acetic acid-medicated groups, the maximum anticoccidial effect was seen in the group medicated with 3% acetic acid, followed by the 2% and 1% acetic acid-medicated groups. Amprolium and 3% acetic acid were almost equivalent in suppressing the negative performance and pathogenic effects associated with coccidiosis (Eimeria tenella) challenge. In summary, acetic acid has the potential to be used as an alternative to chemotherapeutic drugs for Eimeria tenella control. The concentration-dependent anticoccidial effect of acetic acid suggests that further studies should be carried out to determine the maximum safe levels of acetic acid with the least toxic effects for use as an anticoccidial.
Abstract:
The thesis examines the risk-adjusted performance of European small-cap equity funds between 2008 and 2013. Performance is measured using several measures, including the Sharpe ratio, Treynor ratio, Modigliani measure, Jensen alpha, 3-factor alpha and 4-factor alpha. The thesis also addresses the issue of persistence in mutual fund performance. Thirdly, the relationship between the activity of fund managers and fund performance is investigated. Managerial activity is measured using the tracking error and the R-squared obtained from a 4-factor asset pricing model. These issues are investigated using the Spearman rank correlation test, cross-sectional regression analysis and ranked portfolio tests. Monthly return data were provided by Morningstar and cover 88 mutual funds. Results show that small-cap funds earn back a significant amount of their expenses but on average lose to their benchmark index. The evidence of performance persistence over a 12-month time period is weak. Managerial activity is shown to contribute positively to fund performance.
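A minimal sketch of the two activity measures mentioned above, assuming monthly fund, benchmark and factor returns are available as numpy arrays; the function name, inputs and factor construction are illustrative placeholders, not the thesis's actual data pipeline:

```python
import numpy as np

def activity_measures(fund_ret, bench_ret, factors):
    """Tracking error vs. the benchmark and R-squared from a 4-factor model.

    fund_ret, bench_ret: monthly (excess) return series of equal length T
    factors: T x 4 matrix of market, size, value and momentum factor returns
    """
    # Tracking error: standard deviation of the active return.
    te = np.std(fund_ret - bench_ret, ddof=1)

    # 4-factor regression; an R-squared close to 1 indicates low managerial
    # activity (the fund closely tracks the factor benchmark).
    X = np.column_stack([np.ones(len(fund_ret)), factors])
    coef, *_ = np.linalg.lstsq(X, fund_ret, rcond=None)
    resid = fund_ret - X @ coef
    r2 = 1.0 - resid.var(ddof=0) / fund_ret.var(ddof=0)
    return te, r2
```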
Abstract:
The purpose of the study was twofold: first, the association between interpersonal coaching styles and self-determined motivation was examined, followed by an investigation of the motivation-performance relationship. Participants included 221 female Canadian Interuniversity Sport (CIS) rugby players, aged sixteen to thirty-three (M = 20.1, SD = 2.26), who reported the number of years they had played CIS rugby (M = 2.3, SD = 1.37) and organized rugby (M = 5.9, SD = 2.31). Multiple and bivariate regressions were employed, with autonomy-support, structure, and involvement accounting for 17%, 41%, and 22% of the variance in competence, autonomy, and relatedness. The three basic needs accounted for 40% of the variance in motivation, and motivation accounted for 2% of the variance in athletes' perceptions of performance. Findings indicated that autonomy-support emerged as a predictor of all three basic needs, involvement predicted relatedness and competence, autonomy predicted motivation, and motivation predicted athletes' perception of performance.
Abstract:
Research has shown a consistent correlation between efficacy and sport performance (Moritz et al., 2000). This relationship has been shown to be dynamic and reciprocal over seasons (e.g., Myers, Payment, et al., 2004), within games (e.g., Butt et al., 2003), and across trials (e.g., Feltz, 1982). The purpose of the present study was to examine self-efficacy and performance simultaneously within one continuous routine. Forty-seven undergraduate students performed a gymnastic sequence while using an efficacy measure. Results indicated that the efficacy-performance relationship was not reciprocal; previous performance was a significant predictor of subsequent performance (p < .01; βs ranged from .44 to .67). Results further revealed significant differences in efficacy beliefs between groups with high and low levels of performance [F(1, 571) = 7.16, p < .01]. Findings suggest that high levels of performance within a continuous physical activity task result in higher performance scores and higher efficacy beliefs.
Abstract:
Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification benefits. This research examines the selection and timing performance of Canadian mutual funds that invest in fixed-income and equity securities in emerging markets. We use (un)conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between 'skill' and 'luck' for individual funds. All the tests are conducted using a comprehensive data set of emerging-market bond and equity funds over the period 1989-2011. The risk-adjusted measures of performance are estimated using the least squares method with the Newey-West adjustment for standard errors, which is robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selection ability is obtained for a small number of funds, and this evidence is not sensitive to the fee structure. We also find that a majority of individual funds provide zero (and very few provide positive) abnormal returns before fees and significantly negative returns after fees. At the negative end of the tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely 'unlucky'.
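A minimal sketch of the cross-sectional bootstrap logic used to separate 'skill' from 'luck': fund returns are re-simulated under a zero-alpha null by resampling regression residuals, and the actual alpha is compared with the bootstrapped distribution. The function and variable names are illustrative and the resampling scheme is simplified relative to the study's:

```python
import numpy as np

def bootstrap_alpha_pvalue(excess_ret, factors, n_boot=1000, seed=0):
    """Bootstrap p-value for a single fund's alpha under a zero-alpha null."""
    rng = np.random.default_rng(seed)
    T = len(excess_ret)
    X = np.column_stack([np.ones(T), factors])

    # Actual OLS alpha and residuals.
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    alpha_hat = coef[0]
    resid = excess_ret - X @ coef

    # Null returns: keep the factor exposures, force alpha to zero,
    # and resample residuals with replacement.
    null_ret = X[:, 1:] @ coef[1:]
    boot_alphas = np.empty(n_boot)
    for b in range(n_boot):
        sim = null_ret + rng.choice(resid, size=T, replace=True)
        c, *_ = np.linalg.lstsq(X, sim, rcond=None)
        boot_alphas[b] = c[0]

    # Fraction of simulated alphas at least as large as the actual one.
    return alpha_hat, np.mean(boot_alphas >= alpha_hat)
```

Repeating this across all funds and comparing the cross-section of actual alphas (or t-statistics) with the bootstrapped null distribution is the essence of the skill-versus-luck test described above.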