934 results for Performance (engineering)


Relevance: 30.00%

Abstract:

Performance-based maintenance contracts differ significantly from the material- and method-based contracts traditionally used to maintain roads. Road agencies around the world have moved toward the performance-based approach because it offers several advantages, such as cost savings, greater budgeting certainty, and better customer satisfaction through improved road services and conditions. In these contracts, payments for road maintenance are explicitly linked to the contractor successfully meeting clearly defined minimum performance indicators. Quantitative evaluation of the cost of performance-based contracts is difficult because of the complexity of the pavement deterioration process. Based on a probabilistic analysis of failures to achieve multiple performance criteria over the length of the contract period, an effort has been made to develop a model capable of estimating the cost of these performance-based contracts. One essential function of such a model is to predict pavement performance as accurately as possible. Future pavement degradation is predicted using a Markov chain process, which requires estimating transition probabilities from historical deterioration rates of similar pavements. Transition probabilities were derived from historical pavement condition rating data, both for predicting pavement deterioration when no maintenance is performed and for predicting pavement improvement when maintenance activities are performed. A methodological framework was developed to estimate the cost of maintaining a road based on multiple performance criteria such as cracking, rutting, and roughness. The application of the developed model is demonstrated via a real case study of Miami Dade Expressways (MDX), using pavement condition rating data from the Florida Department of Transportation (FDOT) for a typical performance-based asphalt pavement maintenance contract. Results indicated that the pavement performance model developed predicts pavement deterioration quite accurately. A sensitivity analysis showed that the model is responsive to even slight changes in the pavement deterioration rate and the performance constraints. The use of this model is expected to assist highway agencies and contractors in arriving at a fair contract value for long-term performance-based pavement maintenance work.
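
The core of the prediction step is a Markov chain update of a pavement condition-state distribution. Below is a minimal sketch in Python, assuming a hypothetical four-state rating scale and a made-up no-maintenance transition matrix; in the study, these probabilities were estimated from historical FDOT condition rating data:

```python
import numpy as np

# Hypothetical transition matrix for one year without maintenance.
# Row i gives the probability of moving from state i to each state;
# state 1 is the best condition, state 4 the worst (absorbing here).
P_no_maintenance = np.array([
    [0.80, 0.20, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # new pavement: all mass in state 1
for year in range(1, 11):
    state = state @ P_no_maintenance     # propagate the condition distribution
    print(year, np.round(state, 3))
```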

Relevance: 30.00%

Abstract:

With the growth of internet traffic, there is increasing demand for wireless mobile and ubiquitous applications. These applications need antennas that are not only broadband but can also operate in different frequency spectrums. Even as demand for such applications grows, it remains imperative to conserve power, so there is a need for multi-broadband antennas that consume little power. Reconfigurable antennas can operate in different frequency spectrums while conserving power, but current reconfigurable designs work in only one band; reconfigurable antennas that work across multiple frequency spectrums are needed. In this era of high power consumption, there is also growing demand for wireless powering. This dissertation explores designs of reconfigurable antennas that can improve performance and enable wireless powering, and presents laboratory results for the multi-broadband reconfigurable antenna that was fabricated. A detailed mathematical analysis and extensive simulation results are also presented. The novel reconfigurable antenna designs can be extended to Multiple Input Multiple Output (MIMO) environments and military applications.

Relevance: 30.00%

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines. An example of this configuration is a manufacturing facility equipped to assemble and test web servers; a typical web server assembly line features multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models of assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes and job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join stations to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances; however, the flow time error increased as the number of products and the net traffic intensity increased. Correction terms for single and fork-join stations were therefore developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: the average flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All equations in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
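
To give a flavor of this style of approximation (the dissertation's actual formulas are not reproduced in the abstract), here is a hedged sketch of a per-station flow-time estimate built from a basic M/M/1 expression plus a regression-fitted correction term; the coefficients a0 and a1 are placeholders for fitted values:

```python
# Illustrative per-station flow-time approximation with a regression-based
# correction term, in the spirit of the approach described above.
def station_flow_time(arrival_rate, service_rate, a0=0.0, a1=0.0):
    rho = arrival_rate / service_rate            # traffic intensity
    if rho >= 1.0:
        raise ValueError("unstable station: utilization must be < 1")
    base = 1.0 / (service_rate - arrival_rate)   # M/M/1 expected time in system
    correction = a0 + a1 * rho                   # fitted adjustment (placeholder)
    return base * (1.0 + correction)

print(station_flow_time(arrival_rate=8.0, service_rate=10.0, a0=0.05, a1=0.1))
```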

Relevance: 30.00%

Abstract:

This research is based on the premises that teams can be designed to optimize their performance, and that appropriate team coordination is a significant factor in team outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand; organizations therefore need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, its coordination mechanisms, and the job's structural characteristics, and can be used to determine the team design characteristics that most likely lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. The team member agents use decision making, and explicit and implicit mechanisms, to coordinate the job. Model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results of the ANOVA analysis were used to recommend the combination of levels of the experimental factors that minimizes completion time for a team that races sailboats. The main contribution of this research to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model that captures complexity not captured by current models: in a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposes three new types of dependencies between tasks required to model a job as a stochastic structure: conditional sequential, single-conditional sequential, and merge dependencies.
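
The 2^(6-1) design itself is easy to reproduce. The sketch below generates such a half-fraction with the standard defining relation I = ABCDEF, so the sixth factor is aliased with the product of the other five; the factor names are hypothetical stand-ins for the TCM's team design factors:

```python
from itertools import product

# Generate a 2^(6-1) fractional factorial design: enumerate all level
# combinations of five factors, then set F = ABCDE (defining relation
# I = ABCDEF), giving 32 runs instead of the full 64.
factors = ["A", "B", "C", "D", "E"]
runs = []
for levels in product([-1, 1], repeat=5):
    f_level = 1
    for lv in levels:
        f_level *= lv                    # F aliased with ABCDE
    runs.append(dict(zip(factors, levels), F=f_level))

print(len(runs), "runs")                 # 32
print(runs[0])
```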

Relevance: 30.00%

Abstract:

The application of advanced materials in infrastructure has grown rapidly in recent years, mainly because of their potential to ease construction, extend service life, and improve the performance of structures. Ultra-high performance concrete (UHPC) is one such material, considered a novel alternative to conventional concrete. The microstructure of UHPC is optimized to significantly improve its material properties, including compressive and tensile strength, modulus of elasticity, durability, and damage tolerance. Fiber-reinforced polymer (FRP) composite is another novel construction material, with excellent properties such as high strength-to-weight and stiffness-to-weight ratios and good corrosion resistance. Given the exceptional properties of UHPC and FRP, many advantages can result from the combined application of these two advanced materials, which is the subject of this research. The confinement behavior of UHPC was studied for the first time in this research: the stress-strain behavior of a series of UHPC-filled FRP tubes with different fiber types and thicknesses was measured under uniaxial compression. The FRP confinement was shown to significantly enhance both the ultimate strength and the ultimate strain of UHPC. It was also shown that existing confinement models are incapable of predicting the behavior of FRP-confined UHPC; new stress-strain models for FRP-confined UHPC were therefore developed through an analytical study. In the second part of this research, a novel steel-free UHPC-filled FRP tube (UHPCFFT) column system was developed and its cyclic behavior studied. The proposed steel-free UHPCFFT column showed much higher strength and stiffness, with reasonable ductility, compared to its conventional reinforced concrete (RC) counterpart. Using the results of the first phase of column tests, a second series of UHPCFFT columns was built and tested under pseudo-static loading to examine the effect of column parameters on cyclic behavior. Strong correlations were noted between the initial stiffness and the stiffness index, and between the moment capacity and the reinforcement index. Finally, a thorough analytical study of the seismic response of the proposed steel-free UHPCFFT columns showed their superior earthquake resistance compared to their RC counterparts.
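
For readers unfamiliar with confinement modeling, the sketch below shows the classic Richart-type form that existing models (such as Lam and Teng's for FRP-confined conventional concrete) take. It is included only to illustrate the modeling target; the abstract notes these existing models fail for UHPC, and the dissertation's new models are not reproduced here:

```python
# Classic confinement relations for FRP-confined concrete, shown only as an
# illustration of the modeling target; k1 ~ 3.3 is the Lam-Teng value for
# conventional concrete and does not apply to UHPC without recalibration.
def frp_confinement_pressure(f_frp, t_frp, d):
    """Lateral pressure from an FRP jacket: f_l = 2 * f_frp * t / d."""
    return 2.0 * f_frp * t_frp / d

def confined_strength(f_co, f_l, k1=3.3):
    """Peak confined strength: f'cc = f'co + k1 * f_l."""
    return f_co + k1 * f_l

f_l = frp_confinement_pressure(f_frp=2000.0, t_frp=2.0, d=150.0)  # MPa, mm
print(confined_strength(f_co=150.0, f_l=f_l))
```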

Relevance: 30.00%

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large numbers of servers and switches in a data center consume significant amounts of energy; even as servers become more energy efficient through various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, since the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain VM resource demands are bursty and stochastic in nature, we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm to satisfy both deterministic and stochastic demands in VM placement. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, because DCNs are typically provisioned with full bisection bandwidth while traffic demonstrates fluctuating patterns, we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme uses a unified representation that converts the VM placement problem into a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
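
A plausible reading of the M3SBP placement rule as stated above can be sketched as follows; the mean-plus-z-sigma form of the equivalent deterministic value and the data layout are assumptions, not the dissertation's exact formulation:

```python
# Sketch of an M3SBP-style placement step: convert each stochastic demand
# to an equivalent deterministic value, then place the VM on the feasible
# server that maximizes the minimum post-placement utilization ratio.
def equivalent_demand(mean, std, z=1.65):       # assumed ~95th-percentile margin
    return mean + z * std

def place_vm(servers, vm_demands):              # vm_demands: [(mean, std), ...]
    demand = [equivalent_demand(m, s) for m, s in vm_demands]
    best, best_score = None, -1.0
    for srv in servers:
        new_used = [u + d for u, d in zip(srv["used"], demand)]
        if any(nu > c for nu, c in zip(new_used, srv["cap"])):
            continue                            # VM does not fit on this server
        # minimum utilization ratio across resource dimensions after placement
        score = min(nu / c for nu, c in zip(new_used, srv["cap"]))
        if score > best_score:
            best, best_score = srv, score
    if best is not None:
        best["used"] = [u + d for u, d in zip(best["used"], demand)]
    return best

servers = [{"cap": [32.0, 64.0], "used": [8.0, 16.0]},
           {"cap": [32.0, 64.0], "used": [20.0, 40.0]}]
print(place_vm(servers, [(4.0, 1.0), (8.0, 2.0)]))
```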

Relevance: 30.00%

Abstract:

In "Appraising Work Group Performance: New Productivity Opportunities in Hospitality Management," a discussion by Mark R. Edwards, Associate Professor, College of Engineering, Arizona State University, and Leslie Edwards Cummings, Assistant Professor, College of Hotel Administration, University of Nevada, Las Vegas, the authors begin: "Employee group performance variation accounts for a significant portion of the degree of productivity in the hotel, motel, and food service sectors of the hospitality industry. The authors discuss TEAMSG, a microcomputer-based approach to appraising and interpreting group performance. TEAMSG appraisal allows an organization to profile and to evaluate groups, facilitating the targeting of training and development decisions and interventions, as well as the more equitable distribution of organizational rewards." "The caliber of employee group performance is a major determinant in an organization's productivity and success within the hotel and food service industries," Edwards and Cummings say. "Gaining accurate information about the quality of performance of such groups as organizational divisions, individual functional departments, or work groups can be as enlightening..." the authors further reveal. This perspective is especially important not only for strategic human resources planning purposes, but also for diagnosing development needs and for differentially distributing organizational rewards. The authors note that employee requirements in an unpredictable environment, which largely describes the hospitality industry, are difficult to quantify. In an effort to measure elements of performance, Edwards and Cummings look to TEAMSG, an acronym for Team Evaluation and Management System for Groups, and develop the concept. In discussing the background, Edwards and Cummings point out that employees at the individual level must often possess and exercise varied skills. In group circumstances, employees often work at outside locations or move from unit to unit, as in the case of a project team; being able to move from an individual to a group mentality is imperative. "A solution which addresses the frustration and lack of motivation on the part of the employee is to coach, develop, appraise, and reward employees on the basis of group achievement," say the authors. "An appraisal, effectively developed and interpreted, has at least three functions," Edwards and Cummings suggest, and they go on to define them. The authors place great emphasis on rewards and interventions to bolster the assertion set forth in their thesis statement, and they warn that individual agendas can threaten, erode, and undermine group performance; there is no "I" in TEAM.

Relevance: 30.00%

Abstract:

In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of safety performance functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent agency data better than national default SPFs calibrated to agency data, and it is believed that crash trends in Florida differ from those of the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010, for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and one for validation (30%). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each subtype of roadway segment, intersection, and ramp, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated against the validation data set, and the results were compared to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and of the SafetyAnalyst default SPFs calibrated to Florida data was then compared using several methods, including visual plots and statistical goodness-of-fit tests. Plots of the SPFs against the observed crash data were used to compare the prediction performance of the two models; three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R², were also used to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of full SPFs, which include both traffic and geometric variables, in two major applications of SPFs: crash prediction and the identification of high crash locations. Both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop than full SPFs.
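
A flow-only NB SPF commonly takes the form crashes = exp(b0) * AADT^b1 * length, and can be fitted and scored with MAD and MSPE along the lines sketched below; the synthetic data and column names are placeholders for the FDOT/RCI records, and the study's 70/30 calibration/validation split is omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Fit a flow-only negative binomial SPF with segment length as an offset,
# then compute the two goodness-of-fit measures named in the abstract.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "crashes": rng.poisson(3, 200),          # placeholder crash counts
    "aadt": rng.uniform(5e3, 6e4, 200),      # placeholder traffic volumes
    "length_mi": rng.uniform(0.2, 2.0, 200), # placeholder segment lengths
})
X = sm.add_constant(np.log(df["aadt"]))
fit = sm.GLM(df["crashes"], X,
             family=sm.families.NegativeBinomial(),
             offset=np.log(df["length_mi"])).fit()

pred = fit.predict(X, offset=np.log(df["length_mi"]))
mad = np.mean(np.abs(df["crashes"] - pred))     # mean absolute deviance
mspe = np.mean((df["crashes"] - pred) ** 2)     # mean square prediction error
print(fit.params, mad, mspe)
```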

Relevance: 30.00%

Abstract:

Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, wherever a combination of small size, light weight, and high surface-area-to-volume-ratio fluid networks is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that pack a large amount of heat transfer surface area into a very small volume, but they do so at the cost of increased pumping power. To offset this cost, the use of a branching fluid network for distributing coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant, while maintaining a compact, high heat transfer surface-area-to-volume ratio. The derivation of Murray's law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs that minimize entropy generation. Two heat sink designs at different scales were built and tested experimentally and analytically: the first uses this new derivation of Murray's law, and the second uses a combination of Murray's law and constructal theory. The experimental results were used to verify the analytical and numerical models, which were then used to compare the heat sink's performance with other compact high-performance heat sink designs. The results showed that the branching-network design techniques significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations that optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. Elements can be connected together using derived geometric guidelines governing branch diameters and angles, and the methodology can be used to design branching fluid networks that fit any geometry.
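
Murray's law itself is compact enough to state in a few lines: at a bifurcation, the cube of the parent diameter equals the sum of the cubes of the child diameters, so a symmetric split scales diameters by 2^(-1/3). A minimal sketch, with an illustrative root diameter:

```python
# Murray's law at a bifurcation: d_parent^3 = sum of d_child^3.
# For n equal children, d_child = d_parent / n^(1/3).
def murray_child_diameter(d_parent, n_children=2):
    return d_parent / n_children ** (1.0 / 3.0)

d = 8.0  # mm, root channel diameter (illustrative value)
for level in range(4):
    print(level, round(d, 3))
    d = murray_child_diameter(d)   # each level halves the flow per channel
```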

Relevance: 30.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized, dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation is applied to graphical targets before they are presented on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user is presented, using tools such as Zernike polynomials, the wavefront aberration, the point spread function (PSF), and the modulation transfer function (MTF). The ocular aberration of the computer user was measured with a wavefront aberrometer as the reference for the precompensation model, and the dynamic precompensation was generated from the aberration rescaled to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was first explored through software simulation using aberration data from a real human subject. An "artificial eye" experiment, simulating the human eye with a high-definition camera, provided an objective evaluation of image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic computer-use viewing environment. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing a significant improvement in recognition accuracy. The merit and necessity of dynamic precompensation were also substantiated by comparison with static precompensation, and its visual benefit was further confirmed by the subjective assessments collected from the participants.
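
The optical pipeline described here (wavefront, then pupil function, then PSF, then frequency-domain precompensation) can be sketched compactly; the defocus-only wavefront and the Wiener-style regularization constant below are illustrative assumptions, not the dissertation's exact implementation:

```python
import numpy as np

# Build a pupil function from a wavefront map, derive the PSF and OTF,
# and precompensate an image by regularized inverse filtering.
N = 256
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)
wavefront = 0.5 * (2 * r2 - 1) * pupil            # defocus-like term (placeholder)
P = pupil * np.exp(1j * 2 * np.pi * wavefront)    # generalized pupil function

psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2  # incoherent PSF
psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function

image = np.random.rand(N, N)                      # stand-in for a screen target
K = 1e-3                                          # regularization constant
precomp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(otf)
                               / (np.abs(otf)**2 + K)))
print(precomp.shape)
```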

Relevance: 30.00%

Abstract:

Peripheral nerves have demonstrated the ability to bridge gaps of up to 6 mm; peripheral nervous system injury sites beyond this range need autograft or allograft surgery. Central nervous system cells do not allow spontaneous regeneration, owing to intrinsic environmental inhibition. Although stem cell therapy seems a promising approach to nerve repair, it is essential to use the distinct three-dimensional architecture of a cell scaffold, with proper biomolecule embedding, to ensure that the local environment can be controlled well enough for growth and survival. Many approaches have been developed for fabricating 3D scaffolds, and more recently fiber-based scaffolds produced via electrospinning have been garnering increasing interest, as electrospinning offers control over fiber composition and fiber mesh porosity with a relatively simple experimental setup. These attributes make electrospun fibers a promising new class of scaffolds for neural tissue engineering. The purpose of this doctoral study was therefore to investigate the use of the novel material PGD, and its derivative PGDF, for obtaining fiber scaffolds by electrospinning. The performance of these scaffolds, combined with neural-lineage cells derived from embryonic stem cells (ESCs), was evaluated by dissolvability testing, Raman spectroscopy, cell viability assays, real-time PCR, immunocytochemistry, and extracellular electrophysiology. The newly designed collector makes it possible to easily obtain fibers of adequate length and integrity, and the use of solvents such as ethanol and water for electrospinning provides a potentially less toxic and more biocompatible fabrication method. Cell viability testing demonstrated that the addition of gelatin significantly improves cell proliferation on the scaffolds. Both real-time PCR and immunocytochemistry analyses indicated that motor neuron differentiation was achieved, with high motor neuron gene expression, using the metabolites approach; the addition of fumaric acid to the fiber scaffolds further promoted differentiation. Based on these results, the newly fabricated electrospun fiber scaffold, combined with neural-lineage cells, provides a potential alternative strategy for nerve injury repair.

Relevance: 30.00%

Abstract:

Heart valve disease occurs in adults as well as in the pediatric population, due to age-related changes, rheumatic fever, infection, or congenital conditions. Current treatment options are limited to mechanical heart valve (MHV) or bio-prosthetic heart valve (BHV) replacements. Lifelong anticoagulant medication in the case of MHV, and calcification and limited durability in the case of BHV, are major setbacks for both treatments, and the lack of somatic growth of these implants requires multiple surgical interventions in pediatric patients. The advent of stem cell research and regenerative therapy offers a potential alternative approach for this life-threatening condition: the tissue engineered heart valve (TEHV). A TEHV has the potential to promote tissue growth, replacing and regenerating a functional native valve. Hemodynamics play a crucial role in heart valve tissue formation and sustained performance, and the focus of this study was to understand the role of physiological shear stress and flexure effects on de novo HV tissue formation, as well as the resulting gene and protein expression. A bioreactor system was used to generate physiological shear stress and cyclic flexure, and human bone marrow mesenchymal stem cell derived tissue constructs were exposed to native valve-like physiological conditions. The responses of these tissue constructs to the valve-relevant stress states, along with gene and protein expression, were investigated after 22 days of tissue culture. We conclude that the combination of steady flow and cyclic flexure supports engineered tissue formation through the co-existence of oscillatory shear stress (OSS) and appreciable shear stress magnitudes, and potentially augments valvular gene and protein expression when both parameters are in the physiological range.

Relevance: 30.00%

Abstract:

This research aimed to develop a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. The study views an enterprise as a system that creates value for its customers; development of the framework therefore made use of systems theory and IDEF methodologies. ESE is defined as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified by four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. The framework is intended to identify research voids in the ESE discipline, and it helps apply engineering and systems tools to this emerging field: it harnesses the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, each activity defined with its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design covering the physical, managerial, and/or informational layers, and the process applies both to a new enterprise system design and to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
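
The classification scheme defines a 4 x 4 grid of element-facet combinations, each passing through four engineering phases; a few lines of code make the resulting 64-cell structure explicit (the lists are taken directly from the abstract):

```python
from itertools import product

# Enumerate the element-facet-phase cells of the proposed ESE
# classification scheme described above.
elements = ["work", "resources", "decision", "information"]
facets = ["strategy", "competency", "capacity", "structure"]
phases = ["specification", "analysis", "design", "implementation"]

cells = list(product(elements, facets, phases))
print(len(cells))   # 64 element-facet-phase combinations
print(cells[0])     # ('work', 'strategy', 'specification')
```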

Relevance: 30.00%

Abstract:

To work toward the goal of sustainable development, the building energy system was evaluated from the points of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development is discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model, and the economic investment effectiveness model; by combining the foregoing models, a new sustainability index is proposed. Several green building case studies in the U.S. and China are presented, and the influences of building function, geographic location, climate pattern, the regional energy structure, and the future technology improvement potential of renewable energy are discussed. Life cycle analyses of the building envelope, the HVAC system, and the on-site renewable energy system were compared from the energy, exergy, environmental, and economic perspectives. Climate pattern was found to have a dramatic influence on the life cycle investment effectiveness of the building envelope, and the energy performance of the building HVAC system was much better than its exergy performance; to further increase exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A regression model of building life cycle cost and exergy consumption was set up: the optimal building insulation level can be determined either by cost minimization or by exergy consumption minimization, and the exergy approach leads to thicker insulation than the cost approach. The influence of energy prices on the system selection strategy is discussed. Two photovoltaic (PV) systems, stand-alone and grid-tied, were compared using the life cycle assessment method, and the superiority of the latter was clear. The analysis also showed that over its life span PV technology was less attractive economically, because electricity prices in the U.S. and China do not fully reflect the associated environmental burden; however, if future energy price surges and PV system cost reductions are considered, the technology could be very promising for sustainable buildings.
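
The trade-off behind the insulation-level result can be illustrated with a toy cost-minimization sketch: insulation cost rises with thickness while discounted heating cost falls as the wall U-value drops. Every coefficient below is a made-up placeholder, not a value from the study:

```python
import numpy as np

# Toy life cycle cost curve for wall insulation thickness.
thickness = np.linspace(0.01, 0.40, 400)            # m
k = 0.035                                            # W/(m K), insulation conductivity
R_other = 0.5                                        # m^2 K/W, rest of the wall
U = 1.0 / (R_other + thickness / k)                  # wall U-value, W/(m^2 K)

degree_hours = 70e3                                  # K h per year (placeholder climate)
energy_cost = U * degree_hours / 1000 * 0.10 * 20    # $/m^2 over 20 yr at $0.10/kWh
insul_cost = 150.0 * thickness                       # $/m^2 installed (placeholder)
total = energy_cost + insul_cost

print("cost-optimal thickness ~", round(thickness[np.argmin(total)], 3), "m")
```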

Relevance: 30.00%

Abstract:

The purpose of this study was to examine the factors behind the failure rates of Associate in Arts (AA) graduates from Miami-Dade Community College (M-DCC) transferring to the Florida State University System (SUS). In M-DCC's largest disciplines, the university failure rate was 13% for Business & Management, 13% for Computer Science, and 14% for Engineering. The hypotheses tested were: Hypothesis 1 (H1): the lower division (LD) overall cumulative GPA and/or the LD major field GPA of AA graduates are predictive of the SUS GPA in the Business & Management, Computer Science, and Engineering disciplines. Hypothesis 2 (H2): demographic variables (age, race, gender) are predictive of performance at the university among M-DCC AA graduates in Engineering, Business & Management, and Computer Science. Hypothesis 3 (H3): administrative variables (subtests of the CLAST, the College Level Academic Skills Test) are predictive of university performance (GPA) in the Business & Management, Engineering, and Computer Science disciplines. Hypothesis 4 (H4): LD curriculum variables (course credits, course quality points) are predictive of SUS performance in the Engineering, Business & Management, and Computer Science disciplines. Multiple regression was the inferential procedure selected for predictions, and descriptive statistics were generated for the predictors. For H1, the LD GPA was the most significant variable in accounting for the variability of the university GPA in the Business & Management, Computer Science, and Engineering disciplines. For H2, no significant results were obtained for the age and gender variables, but the ethnic subgroups indicated significance at the .0001 level; however, differentials in GPA may not have been due directly to the race factor but rather to curriculum choices and performance outcomes in the LD. The CLAST computation variable (H3) was a significant predictor of the SUS GPA, most likely because of the mathematics structure pervasive in these disciplines. For H4, two curriculum variables were significant in explaining the variability of the university GPA: the number of required critical major credits completed, and the quality of the student's performance in those credits. Descriptive statistics on the predictors indicated that 78% of those failing in the State University System had an LD major GPA (calculated from the critical required university credits earned and the quality points of those credits) of less than 3.0, and 83% of those failing at the university had an overall community college GPA of less than 3.0.
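
The inferential setup can be sketched as an ordinary least squares regression of the university (SUS) GPA on lower-division predictors; the synthetic data and column names below are illustrative stand-ins, not the M-DCC records:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Multiple regression of SUS GPA on lower-division predictors, in the
# spirit of the study's design. Data are synthetic placeholders.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "ld_gpa": rng.uniform(2.0, 4.0, n),
    "ld_major_gpa": rng.uniform(2.0, 4.0, n),
    "clast_computation": rng.uniform(280, 340, n),
})
df["sus_gpa"] = (0.6 * df["ld_gpa"] + 0.2 * df["ld_major_gpa"]
                 + 0.002 * df["clast_computation"]
                 + rng.normal(0, 0.3, n)).clip(0, 4)

X = sm.add_constant(df[["ld_gpa", "ld_major_gpa", "clast_computation"]])
fit = sm.OLS(df["sus_gpa"], X).fit()
print(fit.summary().tables[1])   # coefficient estimates and significance
```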