58 results for performance-based engineering
Abstract:
Agency costs are said to arise from the separation of ownership and control inherent in the corporate form of ownership. One such agency problem concerns the potential variance between the time horizons of principal shareholders and agent managers. Agency theory suggests that these costs can be alleviated or controlled through performance-based Chief Executive Officer (CEO) contracting. However, components of a CEO's compensation contract can exacerbate or mitigate agency-related problems (Antle and Smith, 1985). According to the horizon hypothesis, a self-serving CEO reduces discretionary research and development (R&D) expenditures to increase earnings and earnings-based bonus compensation. Agency theorists contend that the market-based component of a CEO's compensation can mitigate horizon problems. This study examines whether there is a relationship between CEO earnings- and market-based compensation components and R&D expenditures in the largest United States industrial firms from 1987 to 1993. Consistent with the horizon hypothesis, results provide evidence of a negative and statistically significant relationship between CEO cash compensation (i.e., salary and bonus) and the firm's R&D expenditures. Consistent with the expectations of agency theory, results provide evidence of a positive and statistically significant relationship between market-based CEO compensation and R&D. Further results provide evidence of a positive and statistically significant relationship between CEO tenure and the firm's R&D expenditures. Although the relationship between CEO age and the firm's R&D is negative, it is not statistically significant at the 0.05 level.
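A minimal sketch of the kind of cross-sectional regression this study describes: regressing a firm's R&D spending on the earnings-based (cash) and market-based components of CEO pay plus tenure and age. The variable names and simulated data are illustrative assumptions, not the study's dataset; only the signs of the simulated effects follow the reported findings.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    cash_comp = rng.lognormal(0.0, 0.5, n)    # salary + bonus (earnings-based)
    market_comp = rng.lognormal(0.0, 0.5, n)  # stock/option grants (market-based)
    tenure = rng.integers(1, 20, n).astype(float)
    age = rng.integers(45, 70, n).astype(float)
    # Simulated R&D: negative loading on cash pay, positive on market-based pay
    rd = 1.0 - 0.4 * cash_comp + 0.6 * market_comp + 0.05 * tenure + rng.normal(0, 0.5, n)

    X = sm.add_constant(np.column_stack([cash_comp, market_comp, tenure, age]))
    fit = sm.OLS(rd, X).fit()
    print(fit.summary(xname=["const", "cash", "market", "tenure", "age"]))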
Abstract:
The purpose of this study was to document and critically analyze the lived experience of selected nursing staff developers in the process of moving toward a new model for hospital nursing education. Eleven respondents were drawn from a nationwide population of about two hundred individuals involved in nursing staff development. These subjects were responsible for the implementation of the Performance Based Development System (PBDS) in their institutions. A purposive, criterion-based sampling technique was used, with respondents selected according to size of hospital, primary responsibility for orchestration of the change, influence over budgetary factors, and managerial responsibility for PBDS. Data were gathered by the researcher through both in-person and telephone interviews. A semi-structured interview guide designed by the researcher was used, and respondents were encouraged to expand on their recollections as desired. Audiotapes were transcribed, and the resulting computer files were analyzed using the program "Martin". Answers to interview questions were compiled and reported across cases. The data were then reviewed a second time and interpreted for emerging themes and patterns. Two types of verification were used in the study. Internal verification was done through interview transcript review and feedback by respondents. External verification was done through review of, and feedback on, the data analysis by readers experienced in the management of staff development departments. All respondents were female, so Gilligan's concept of the "ethic of care" was examined as a decision-making strategy. Three levels of caring that influenced decision making were found: caring (a) for the organization, (b) for the employee, and (c) for the patient. The four existentials of the lived experience (relationality, corporeality, temporality, and spatiality) were also examined to reveal the everydayness of making change.
Abstract:
The most important factor affecting the decision-making process in finance is risk, which is usually measured by variance (total risk) or systematic risk (beta). Since an investor's sentiment (whether she is an optimist or a pessimist) plays a very important role in the choice of the beta measure, any decision made for the same asset within the same time horizon will differ across individuals. In other words, behavioral traits mean that neither homogeneity of beliefs nor rational expectations will prevail in the market. This dissertation consists of three essays. In the first essay, "Investor Sentiment and Intrinsic Stock Prices", a new technical trading strategy is developed using a firm-specific individual sentiment measure. This behaviorally based trading strategy forecasts a range within which a stock price moves in a particular period and can be used for stock trading. Results indicate that sample firms trade within a range and give signals as to when to buy or sell. The second essay, "Managerial Sentiment and the Value of the Firm", examines the effect of managerial sentiment on the project selection process under the net present value criterion and on the value of the firm. The analysis shows that high-sentiment and low-sentiment managers obtain different values for the same firm before and after the acceptance of a project, and that managerial sentiment produces changes in the cost of capital and the weighted average cost of capital. The last essay, "Investor Sentiment and Optimal Portfolio Selection", analyzes how investor sentiment affects the nature and composition of the optimal portfolio as well as portfolio performance. Results suggest that investor sentiment completely changes the portfolio composition; that is, a high-sentiment investor will hold a completely different choice of assets than a low-sentiment investor. The results indicate the practical applicability of a behavioral-model-based technical indicator for stock trading. Additional insights include the valuation of firms with a behavioral component and the importance of distinguishing portfolio performance based on sentiment factors.
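The second essay's mechanism lends itself to a worked example: the same project cash flows discounted at sentiment-shifted rates yield different NPVs, so a high-sentiment and a low-sentiment manager can value the same project differently. The cash flows and the plus-or-minus 2% rate adjustments below are hypothetical assumptions, not figures from the dissertation.

    def npv(cash_flows, rate):
        # cash_flows[0] is today's outlay; cash_flows[t] arrives at end of year t
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    project = [-1000, 300, 350, 400, 450]      # initial outlay, then inflows
    base_wacc = 0.10
    high = npv(project, base_wacc - 0.02)      # high sentiment: lower perceived risk
    low = npv(project, base_wacc + 0.02)       # low sentiment: higher perceived risk
    print(f"high-sentiment NPV: {high:.2f}, low-sentiment NPV: {low:.2f}")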
Abstract:
This study took place at one of the intercultural universities (IUs) of Mexico that serve primarily indigenous students. The IUs are pioneers in higher education despite their numerous challenges (Bertely, 1998; Dietz, 2008; Pineda & Landorf, 2010; Schmelkes, 2009). To overcome educational inequalities among their students (Ahuja, Berumen, Casillas, Crispín, Delgado et al., 2004; Schmelkes, 2009), the IUs have embraced performance-based assessment (PBA; Casillas & Santini, 2006). PBA allows a shared model of power and control related to learning and evaluation (Anderson, 1998). In a review of the IUs' PBA strategies, the researcher did not find a PBA instrument with acceptable validity and reliability estimates. The purpose of this study was therefore to develop a process for creating such an instrument, an analytic general rubric, to assess students' attainment of competencies in one of the IU's majors, Intercultural Development Management. The Human Capabilities Approach (HCA) was the theoretical framework, and a sequential mixed methods design (Creswell, 2003; Teddlie & Tashakkori, 2009) guided the research. IU participants created a rubric during two focus groups, and seven Spanish-speaking professors in Mexico and the US piloted it using students' research projects. The evidence that demonstrates the attainment of competencies at the IU is a complex set of actual, potential, and/or desired performances or achievements, also conceptualized as "functional capabilities" (FCs; Walker, 2008), that can be used to develop a rubric. Results indicate that the rubric's validity and reliability estimates reached an acceptable level of 80% agreement, surpassing minimum requirements (Newman, Newman, & Newman, 2011). Implications for practice involve the use of PBA within a formative assessment framework and the dynamic inclusion of constituencies. Recommendations for further research include introducing this study's instrument-development process to other IUs, conducting parallel mixed design studies exploring the intersection between the HCA and assessment, and conducting a case study exploring assessment in intercultural settings. Education articulated through the HCA empowers students (Unterhalter & Brighouse, 2007; Walker, 2008). This study aimed to contribute to the quality of student learning assessment at the IUs by providing a participatory process for developing a PBA instrument.
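As a hedged sketch of the agreement statistic reported for the rubric, the snippet below computes pairwise percent agreement among raters on hypothetical rubric scores. The seven-rater setup mirrors the pilot, but the scores and the exact agreement formula the study used are assumptions.

    import itertools
    import numpy as np

    # rows = student projects, columns = the seven pilot raters
    scores = np.array([
        [3, 3, 3, 2, 3, 3, 3],
        [4, 4, 4, 4, 3, 4, 4],
        [2, 2, 2, 2, 2, 3, 2],
        [4, 4, 3, 4, 4, 4, 4],
    ])

    # share of rater pairs assigning identical scores, averaged over projects
    pairs = list(itertools.combinations(range(scores.shape[1]), 2))
    agreement = np.mean([scores[:, a] == scores[:, b] for a, b in pairs])
    print(f"pairwise percent agreement: {agreement:.0%}")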
Abstract:
This research is based on the premises that teams can be designed to optimize their performance and that appropriate team coordination is a significant factor in team performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, its coordination mechanisms, and the job's structural characteristics, and can be used to determine the design characteristics most likely to lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. Model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design (sketched below) demonstrates the application of the simulation model to the design of a team. The results of the ANOVA were used to recommend the combination of levels of the experimental factors that optimizes completion time for a team that races sailboats. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change while the team executes the job. This research proposed three new types of dependencies between tasks required to model a job as a stochastic structure: conditional sequential, single-conditional sequential, and merge dependencies.
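The 2^(6-1) design mentioned above can be constructed directly: take a full factorial in five two-level factors and alias the sixth to their product (defining relation I = ABCDEF), halving the 64-run full factorial to 32 runs. The factor labels below are generic placeholders, not the study's team-design factors.

    from itertools import product

    runs = []
    for levels in product([-1, +1], repeat=5):       # full 2^5 design in A..E
        a, b, c, d, e = levels
        runs.append(levels + (a * b * c * d * e,))   # F = ABCDE, so I = ABCDEF

    print(" A  B  C  D  E  F")
    for run in runs:
        print(" ".join(f"{v:+d}" for v in run))
    print(f"{len(runs)} runs instead of {2 ** 6} for the full factorial")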
Abstract:
Parameter design is an experimental design and analysis methodology for developing robust processes and products, where robustness implies insensitivity to noise disturbances. Subtle experimental realities, such as the joint effect of process knowledge and analysis methodology, may affect the effectiveness of parameter design in precision engineering, where the objective is to detect minute variation in product and process performance. In this thesis, statistical forced-noise design approaches and analysis methodologies were investigated with respect to their ability to detect performance variations. Given a low degree of process knowledge, Taguchi's signal-to-noise ratio analysis was found to be more suitable for detecting minute performance variations than the classical approach based on polynomial decomposition. Comparison of the inner-array noise (IAN) and outer-array noise (OAN) structuring approaches showed that OAN is the more efficient design for precision engineering.
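As a sketch of the Taguchi analysis contrasted here with classical polynomial decomposition, the snippet below computes the nominal-the-best signal-to-noise ratio, one of Taguchi's standard S/N statistics; whether the thesis used this exact variant is an assumption, and the response data are made up. Each row stands for one inner-array run replicated across noise conditions.

    import numpy as np

    def sn_nominal_the_best(y):
        # S/N = 10 * log10(ybar^2 / s^2); larger means less sensitive to noise
        y = np.asarray(y, dtype=float)
        return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

    runs = [
        [10.1, 10.0, 9.9, 10.2],   # run 1: tight responses across noise levels
        [10.5, 9.4, 10.8, 9.2],    # run 2: similar mean, far more variation
    ]
    for i, y in enumerate(runs, 1):
        print(f"run {i}: S/N = {sn_nominal_the_best(y):.1f} dB")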
Abstract:
Pavement performance is one of the most important components of a pavement management system, and predicting the future performance of a pavement section is important in programming maintenance and rehabilitation needs. Models for predicting pavement performance have traditionally been developed on the basis of traffic and age. The purpose of this research is to extend a relatively new approach, adaptive logic networks (ALN), to pavement performance prediction modeling. Adaptive logic networks have recently emerged as an effective alternative to artificial neural networks for machine learning tasks. The ALN predictive methodology is applicable to a wide variety of contexts, including prediction of roughness-based indices, composite rating indices, and/or individual pavement distresses. The ALN program requires key information about a pavement section, including the current distress indexes, pavement age, climate region, traffic, and other variables, to predict yearly performance values into the future. This research investigates the effect of different ALN learning rates in pavement performance modeling. The approach can be used at both the network and project levels for predicting the long-term performance of a road network. Results indicate that the ALN approach is well suited to pavement performance prediction modeling and shows a significant improvement over results obtained from other artificial intelligence approaches.
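Adaptive logic network implementations are not widely packaged, so as a plainly labeled stand-in the sketch below trains a generic regressor on the same kinds of inputs the abstract lists (current distress index, age, climate region, traffic) to predict the next year's performance value. The feature set, deterioration function, and data are illustrative assumptions, and the tree regressor is a substitute for the ALN, not its implementation.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(3)
    n = 300
    distress = rng.uniform(50, 100, n)     # current condition index (100 = new)
    age = rng.uniform(0, 25, n)            # years in service
    climate = rng.integers(0, 3, n)        # coded climate region
    traffic = rng.uniform(0.1, 5.0, n)     # loading, millions of ESALs per year

    # Simulated deterioration: condition drops with age and traffic loading
    next_year = distress - 0.8 * np.sqrt(age) - 1.5 * traffic + rng.normal(0, 2, n)

    X = np.column_stack([distress, age, climate, traffic])
    model = DecisionTreeRegressor(max_depth=4).fit(X, next_year)
    print("predicted next-year index:", model.predict([[80, 10, 1, 2.0]])[0])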
Abstract:
Highways are generally designed to serve a mixed traffic flow that consists of passenger cars, trucks, buses, recreational vehicles, etc. Because the impacts of these different vehicle types are not uniform, mixed flow creates problems for highway operations and safety. A common approach to reducing the impacts of truck traffic on freeways has been to restrict trucks to certain lane(s) to minimize the interaction between trucks and other vehicles and to compensate for their differences in operational characteristics. The performance of different truck lane restriction alternatives differs under different traffic and geometric conditions. Thus, a good estimate of the operational performance of the alternatives under prevailing conditions is needed to support informed decisions. This study develops operational performance models that can be applied to identify the most operationally efficient truck lane restriction alternative on a freeway under prevailing conditions. The operational performance measures examined include average speed, throughput, speed difference, and lane changes; prevailing conditions include number of lanes, interchange density, free-flow speeds, volumes, truck percentages, and ramp volumes. Because it is difficult to collect sufficient field data for an empirical modeling procedure involving this many variables, simulation was used to estimate the performance values for the various truck lane restriction alternatives under various scenarios. Both the CORSIM and VISSIM simulation models were examined for their ability to model truck lane restrictions; due to a major problem found in CORSIM's truck lane modeling, VISSIM was adopted as the simulator for this study. The VISSIM model was calibrated mainly to replicate the capacity given in the 2000 Highway Capacity Manual (HCM) for various free-flow speeds under ideal basic freeway section conditions. Nonlinear regression models for average speed, throughput, average number of lane changes, and the speed difference between lane groups were developed. Based on the performance models developed, a simple decision procedure was recommended to select the desired truck lane restriction alternative for prevailing conditions.
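A hedged sketch of the nonlinear regression step: fitting average speed as a function of volume and truck percentage with least squares. The functional form, units, and data points are assumptions for illustration, not the study's calibrated models.

    import numpy as np
    from scipy.optimize import curve_fit

    def speed_model(X, ffs, a, b):
        # speed decays from free-flow speed as volume and truck share rise
        volume, truck_pct = X
        return ffs - a * (volume / 1000.0) ** 2 - b * truck_pct

    volume = np.array([800.0, 1200.0, 1600.0, 2000.0, 2200.0])  # veh/h/lane
    truck_pct = np.array([5.0, 10.0, 15.0, 20.0, 25.0])         # percent trucks
    speed = np.array([69.0, 66.5, 62.0, 55.5, 50.0])            # mi/h (simulated)

    params, _ = curve_fit(speed_model, (volume, truck_pct), speed, p0=[70.0, 2.0, 0.2])
    print("fitted ffs, volume coefficient, truck coefficient:", params)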
Abstract:
The purpose of this study was to design a preventive scheme using directional antennas to improve the performance of mobile ad hoc networks. In this dissertation, a novel Directionality based Preventive Link Maintenance (DPLM) scheme is proposed to characterize the performance gain [JaY06a, JaY06b, JCY06] obtained by extending the life of links. To maintain a link and take preventive action, the signal strength of data packets is measured, and location or angle-of-arrival information collected during communication is saved in a table. When the measured signal strength falls below an orientation threshold, an orientation warning is generated toward the previous hop node. Once the warning is received by the previous hop (adjacent) node, that node verifies the warning's correctness with a few hello pings, initiates a high-quality directional link (one above the threshold), and immediately switches to it, avoiding a link break altogether. The location information is used to create the directional link by orienting the neighboring nodes' antennas toward each other. We call this operation an orientation handoff, which is similar to soft handoff in cellular networks. Signal strength is the indicating factor that represents the health of a link and helps predict link failure: link breakage results from node movement and the consequent decline in the signal strength of received packets. By taking the preventive action described above, the DPLM scheme helps ad hoc networks avoid or postpone the costly route rediscovery operation of on-demand routing protocols. This dissertation advocates close but simple collaboration among the routing, medium access control, and physical layers. To extend the link, the Dynamic Source Routing (DSR) and IEEE 802.11 MAC protocols were modified to exploit the ability of directional antennas to transmit over longer distances. A directional antenna module with two separate modes of operation, omnidirectional and directional, was implemented in the OPNET simulator, incorporated into the wireless node model, and used in simulations to characterize the performance improvement of mobile ad hoc networks. Extensive simulations show that, without noticeably affecting the behavior of the routing protocol, aggregate throughput, packet delivery ratio, end-to-end delay (latency), routing overhead, number of data packets dropped, and number of path breaks are improved considerably. Analysis of the results across different scenarios indicates that the use of directional antennas with the proposed DPLM scheme is promising for improving the performance of mobile ad hoc networks.
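The preventive logic reads naturally as code. The toy simulation below follows the scheme's warn-verify-handoff sequence: when signal strength drops below the orientation threshold, the warning is verified with a few hello pings, and an orientation handoff to a directional link is performed before the break. The threshold value, the 12 dB directional gain, and all names are hypothetical, not the dissertation's implementation.

    import random

    ORIENTATION_THRESHOLD_DBM = -85.0

    class Link:
        # toy link whose signal strength decays as the nodes drift apart
        def __init__(self, rssi_dbm):
            self.rssi_dbm = rssi_dbm
            self.directional = False

        def hello_ping(self):
            return self.rssi_dbm + random.uniform(-2.0, 2.0)

        def orientation_handoff(self):
            # steering both antennas toward each other extends range, modeled
            # here as a fixed directional gain added to the link budget
            self.directional = True
            self.rssi_dbm += 12.0

    def maintain(link):
        if link.rssi_dbm < ORIENTATION_THRESHOLD_DBM:          # orientation warning
            pings = [link.hello_ping() for _ in range(3)]      # verify with hellos
            if sum(p < ORIENTATION_THRESHOLD_DBM for p in pings) >= 2:
                link.orientation_handoff()                     # avoid the link break

    link = Link(rssi_dbm=-60.0)
    for _ in range(40):
        link.rssi_dbm -= 1.0          # node movement weakens the link each step
        maintain(link)
    print(f"handoff performed: {link.directional}, final rssi: {link.rssi_dbm:.1f} dBm")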
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression to the mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit to the data than the Poisson distribution model, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as in most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with accident modification factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
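As a sketch of the SPF form the study settles on, the snippet below fits a negative binomial model of segment crashes against traffic exposure, crashes = exp(b0) * AADT^b1, on simulated over-dispersed counts. The coefficients, dispersion value, and data are illustrative assumptions rather than the Florida estimates.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    aadt = rng.uniform(2_000, 80_000, n)                     # exposure per segment
    mu = np.exp(-6.0 + 0.8 * np.log(aadt))                   # nonlinear in exposure
    crashes = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # over-dispersed counts

    X = sm.add_constant(np.log(aadt))
    spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(spf.params)   # [b0, b1] in crashes = exp(b0) * AADT^b1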
Abstract:
This qualitative case study explored how employees learn from Team Primacy Concept (TPC)-based employee evaluation and how they apply the knowledge in their job performance. Kolb's (1974) experiential learning model served as the conceptual framework, revealing the process by which employees learn from TPC evaluation, namely, how they experience, reflect on, conceptualize, and act on performance feedback. TPC-based evaluation is a form of multirater evaluation that consists of three components: self-feedback, supervisor's feedback, and peer feedback. Its distinctive characteristic is the team evaluation component, during which the employee's professional performance is discussed by peers in a face-to-face team setting, whereas other forms of multirater evaluation are usually conducted in a confidential and anonymous manner. Case study formed the methodological framework. The case was the Southeastern Virginia (SEVA) region of the Institute for Family Centered Services (IFCS), and the participants were eight employees of the SEVA region. Findings showed that the evaluation process was anxiety producing for employees, especially the process of peer evaluation in a team setting, and that preparation was an important phase of TPC evaluation. Overall, the positive feedback delivered in a team setting made team members feel acknowledged. The participants felt that honesty in providing feedback and openness to hearing challenges were significant prerequisites to the TPC evaluation process. Further, in the planning phase, employees strove to develop goals that were meaningful to them, and the catalyst for feedback implementation appeared to stem from one's accountability to self and to the client or community. Generally, the participants identified a number of performance improvement goals, supported by their developmental plans, that they attained during their employment with IFCS. In conclusion, the study identified the process by which employees learned from TPC-based evaluation and the ways in which they used the knowledge to improve their job performance: how participants felt and what they thought about TPC-based feedback, in what ways they reflected on and made meaning of the feedback, and how they used the feedback to improve their job performance.
Abstract:
Catastrophic failure from intentional terrorist attacks on surface transportation infrastructure could be detrimental to society. To minimize vulnerabilities and ensure a safe transportation system, this study investigates security for transportation structures, primarily bridges, subjected to man-made hazards. A procedure for identifying and prioritizing "critical bridges" using screening and prioritization processes is established. For each critical bridge, a systematic risk-based assessment approach is proposed that takes into account the combination of threat occurrence likelihood, its consequences, and the socioeconomic importance of the bridge. A series of effective security countermeasures is compiled in the four categories of deterrence, detection, defense, and mitigation to help reduce the vulnerability of critical bridges. The concepts of a simplified equivalent I-shape cross section and virtual materials are proposed for integration into a nonlinear finite element model, which helps assess the performance of reinforced concrete structures with and without composite retrofit or hardening measures under blast loading. A series of parametric studies is conducted for single-column and two-column pier frame systems as well as for an entire bridge. The parameters considered include column height, column type, concrete strength, longitudinal steel reinforcement ratio, thickness, fiber angle and tensile strength of the fiber reinforced polymer (FRP) tube, shape of the cross section, damping ratio, and different bomb sizes. The study shows the benefits of hardening with composites against blast loading: the effect of steel reinforcement on the blast resistance of the structure is more significant than the effect of concrete compressive strength, and multiple blasts do not necessarily lead to more severe destruction than a single detonation at a strategically vulnerable location on the bridge.
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity, and they have been recognized as a major contributor to traffic congestion on our nation's highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida, was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and selections of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume contributed the least. The results from this research provide useful insights into the design of AID for arterial street applications.
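A compact, hedged sketch of two design elements described above: smoothing each detector time series with a discrete wavelet transform before classification, and scoring the classifier by detection rate (DR) and false alarm rate (FAR). The wavelet choice, the small network, the simulated speed data, and the FAR definition (false alarms over non-incident intervals) are assumptions for illustration.

    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)

    def dwt_features(signal):
        # one-level DWT; keep the approximation coefficients as smoothed features
        approx, _detail = pywt.dwt(signal, "db4")
        return approx

    # simulated speed series per interval; incidents depress average speeds
    n = 400
    labels = rng.integers(0, 2, n)
    speeds = 45.0 - 15.0 * labels[:, None] + rng.normal(0, 5, (n, 16))
    X = np.array([dwt_features(s) for s in speeds])

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    pred = clf.predict(X)
    dr = np.mean(pred[labels == 1] == 1)    # detected incidents / true incidents
    far = np.mean(pred[labels == 0] == 1)   # false alarms / non-incident intervals
    print(f"DR = {dr:.0%}, FAR = {far:.1%}")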