24 results for problem-solving algorithms
in Digital Commons at Florida International University
Abstract:
Although the literature on the types of abilities and processes that contribute to identity formation has been growing, the research has been mainly descriptive/correlational. This dissertation reports one of the first experimental investigations of the role of two theoretically distinct processes (exploration and critical problem solving) in identity formation. The experimental training design (pre-post, training versus control) used in this study was intended to promote identity development by fostering an increase in the use of exploration and critical problem solving with respect to making life choices. Participants were 53 psychology students from a large urban university, randomly assigned to the two groups. The most theoretically significant finding was that the intervention was successful in inducing change in the ability to use critical skills in resolving life decisions, as well as effecting a positive change in identity status.
Abstract:
The field of chemical kinetics is an exciting and active field. The prevailing theories make a number of simplifying assumptions that do not always hold in actual cases. Another current problem concerns the development of efficient numerical algorithms for solving the master equations that arise in the description of complex reactions. The objective of the present work is twofold: to furnish a completely general and exact theory of reaction rates, in a form reminiscent of transition state theory and valid for all fluid phases, and to develop a computer program that can solve complex reactions by finding the concentrations of all participating substances as functions of time. To this end, full quantum scattering theory is used to derive the exact rate law, and the resulting cumulative reaction probability is put into several equivalent forms that take into account all relativistic effects where applicable, including one that is strongly reminiscent of transition state theory but includes corrections from scattering theory. Two programs, one for solving complex reactions and the other for solving first-order linear kinetic master equations, were then developed and tested on simple applications.
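First-order linear kinetic master equations of the kind the second program targets can be integrated numerically. The sketch below, which is illustrative and not the dissertation's program, applies a classic fourth-order Runge-Kutta step to the reversible reaction A ⇌ B with hypothetical rate constants:

```python
# Integrate the first-order linear master equation dc/dt = K c for a
# reversible reaction A <-> B using classic RK4. Rate constants k_ab
# (A -> B) and k_ba (B -> A) are illustrative values.

def rk4_step(f, c, dt):
    """One fourth-order Runge-Kutta step for dc/dt = f(c)."""
    k1 = f(c)
    k2 = f([ci + 0.5 * dt * ki for ci, ki in zip(c, k1)])
    k3 = f([ci + 0.5 * dt * ki for ci, ki in zip(c, k2)])
    k4 = f([ci + dt * ki for ci, ki in zip(c, k3)])
    return [ci + dt / 6.0 * (a + 2 * b + 2 * d + e)
            for ci, a, b, d, e in zip(c, k1, k2, k3, k4)]

def simulate(k_ab, k_ba, c0, dt=0.01, steps=2000):
    """Evolve concentrations [A, B] from c0 over steps*dt time units."""
    def rhs(c):
        a, b = c
        return [-k_ab * a + k_ba * b, k_ab * a - k_ba * b]
    c = list(c0)
    for _ in range(steps):
        c = rk4_step(rhs, c, dt)
    return c

a, b = simulate(2.0, 1.0, [1.0, 0.0])
# Mass is conserved, and the system relaxes toward the equilibrium
# ratio [B]/[A] = k_ab/k_ba = 2.
```

For stiff master equations with widely separated rate constants, an implicit method or the matrix exponential would be preferred over explicit RK4.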
Abstract:
The span of control is the most discussed single concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example, heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting the management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to each worker, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, which is the sum of the supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in the make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters with practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to the one achieved by the optimal solution.
Computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic procedure generates good solutions in a time-efficient manner.
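Meta-RaPS constructs solutions with a randomized greedy rule: with some probability it takes the best-priority candidate, otherwise it picks at random among candidates within a percentage of the best. The sketch below applies that idea to a hypothetical job-to-worker cost matrix; the data, parameters, and simplified assignment setting are illustrative, not the dissertation's model:

```python
# Meta-RaPS-style randomized greedy construction: assign each job to a
# worker, taking the cheapest feasible worker with probability p, else
# choosing randomly among workers within r% of the cheapest.

import random

def metaraps_assign(cost, p=0.7, r=0.2, iterations=50, seed=0):
    """cost[job][worker] = assignment cost; square matrix assumed."""
    rng = random.Random(seed)
    n = len(cost)
    best_total, best_sol = float("inf"), None
    for _ in range(iterations):
        free = set(range(n))
        sol, total = [], 0.0
        for job in range(n):
            cands = sorted(free, key=lambda w: cost[job][w])
            cheapest = cost[job][cands[0]]
            if rng.random() < p:
                pick = cands[0]                      # greedy choice
            else:                                     # restricted random choice
                near = [w for w in cands
                        if cost[job][w] <= cheapest * (1 + r)]
                pick = rng.choice(near)
            sol.append(pick)
            total += cost[job][pick]
            free.remove(pick)
        if total < best_total:
            best_total, best_sol = total, sol
    return best_total, best_sol

cost = [[4, 2, 8], [4, 3, 7], [3, 1, 6]]
total, assignment = metaraps_assign(cost)
```

A full Meta-RaPS implementation would also apply an improvement (local search) phase to the best constructed solutions.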
Abstract:
This study determined the levels of algebra problem-solving skill at which worked examples promoted learning of further problem-solving skill and reduction of cognitive load in college developmental algebra students. Problem-solving skill was objectively measured as error production; cognitive load was subjectively measured as perceived mental effort. Sixty-three subjects were pretested, received homework of worked examples or mass problem solving, and were posttested. Univariate ANCOVAs (covariate = previous grade) were performed on the practice and posttest data. The factors used in the analysis were practice strategy (worked examples vs. mass problem solving) and algebra problem-solving skill (low vs. moderate vs. high). Students in the practice phase who studied worked examples exhibited (a) fewer errors and reduced cognitive load at moderate skill; (b) neither fewer errors nor reduced cognitive load at low skill; and (c) only reduced cognitive load at high skill. In the posttest, only cognitive load was reduced. The results suggested that worked examples be emphasized for developmental students with moderate problem-solving skill. Areas for further research were discussed.
Abstract:
This thesis extended previous research on critical decision making and problem solving by refining and validating a measure designed to assess the use of critical thinking and critical discussion in sociomoral dilemmas. The purpose of this thesis was twofold: 1) to refine the administration of the Critical Thinking Subscale of the CDP to elicit more adequate responses and to refine the coding and scoring procedures for the total measure, and 2) to collect preliminary data on the initial reliabilities of the measure. Subjects consisted of 40 undergraduate students at Florida International University. Results indicate that the use of longer probes on the Critical Thinking Subscale was more effective in eliciting the adequate responses necessary for coding and evaluating the subjects' performance. Analyses of the psychometric properties of the measure consisted of test-retest reliability and inter-rater reliability.
Abstract:
This thesis extends previous research on critical decision making and problem solving by refining and validating a self-report measure designed to assess the use of critical decision making and problem solving in making life choices. Two studies were conducted, yielding two sets of data on the psychometric properties of the measure. Psychometric analyses included item analysis, internal consistency reliability, interrater reliability, and an exploratory factor analysis. The study also included a regression analysis with the Wonderlic, an established measure of general intelligence, to provide preliminary evidence for the construct validity of the measure.
Abstract:
This study examined the influence of age, expertise, and task difficulty on children's patterns of collaboration. Six- and eight-year-old children were individually pretested for ability to copy a Lego model and then paired with each other and asked to copy two more models. The design was a 3 (dyad skill level: novice, expert, or mixed) X 2 (age: six or eight) X 2 (task difficulty: moderate or complex) factorial. Results indicated that cooperation increased with age and expertise and decreased with task difficulty. However, expertise had a greater influence on younger than older children's interaction styles. It is argued that with age, social skills may become as important as expertise in determining styles of collaboration. The issue is raised of whether cooperation, domination, and independence represent developmental sequences (i.e., independence precedes cooperation) or whether they represent personal styles of interaction. Finally, it is suggested that an important goal for future research is to assess the relationship between patterns of collaboration and learning.
Abstract:
The long-term goal of the work described is to contribute to the emerging literature of prevention science in general, and to school-based psychoeducational interventions in particular. The psychoeducational intervention reported in this study used a main-effects prevention intervention model. The current study focused on promoting optimal cognitive and affective functioning. The goal of this intervention was to increase potential protective factors such as critical cognitive and communicative competencies (e.g., critical problem solving and decision making) and affective competencies (e.g., personal control and responsibility) in middle adolescents who have been identified by the school system as being at risk for problem behaviors. The current psychoeducational intervention draws on an ongoing program of theory and research (Berman, Berman, Cass Lorente, Ferrer Wreder, Arrufat, & Kurtines, 1996; Ferrer Wreder, 1996; Kurtines, Berman, Ittel, & Williamson, 1995) and extends it to include Freire's (1970) concept of transformative pedagogy in developing school-based psychoeducational programs that target troubled adolescents. The results of the quantitative and qualitative analyses indicated trends that were generally encouraging with respect to the effects of the intervention on increasing critical cognitive and affective competencies.
Abstract:
The purpose of this study was to examine the perspectives of three graduates of a problem-based learning (PBL) physical therapy (PT) program about their clinical practice. The study used the qualitative methods of observations, interviews, and journaling to gather the data. Three sessions of audiotaped interviews and two observation sessions were conducted with three exemplars from Nova Southeastern University's PBL PT program. Each participant also maintained a reflective journal. The data were analyzed using content analysis. A systematic filing system was used, employing a mechanical means of maintaining and indexing coded data and sorting data into coded classifications of subtopics or themes. All interview transcripts, field notes from observations, and journal accounts were read, and index sheets were appropriately annotated. The findings indicated that, from the participants' perspectives, they were practicing at typically expected levels as clinicians. The attributes that governed the perspectives of the participants about their physical therapy clinical practice included flexibility, reflection, analysis, decision-making, self-reliance, problem-solving, independent thinking, and critical thinking. Further, the findings indicated that the factors that influenced those attributes included the PBL process, parents' value system, a self-reliant personality, innate personality traits, and deliberate choice. Finally, the findings indicated that the participants' perspectives, for the most part, appeared to support the espoused efficacy of the PBL educational approach. In conclusion, there is evidence that the physical therapy clinical practice of the participants was positively impacted by the PBL curriculum. Among the many attributes noted as governing these perspectives, problem-solving, as postulated by Barrows, was one of the most frequently mentioned benefits gained from their PBL PT training.
With more schools adopting the PBL approach, this research will hopefully add to the knowledge base regarding the efficacy of embracing a problem-based learning instructional approach in physical therapy programs.
Abstract:
Numerical optimization is a technique where a computer is used to explore design parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches amongst its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were simultaneously optimized: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and present in the optimization. In modern optimization problems, objective function evaluations may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function.
Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions involving from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
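The Pareto frontier described above is the set of non-dominated designs: a design is on the frontier if no other design is at least as good in every objective and strictly better in one. A minimal sketch, using hypothetical two-objective design scores where both objectives are minimized (e.g. drag and negative volume):

```python
# Extract the Pareto frontier (non-dominated set) from candidate
# designs scored on two objectives, both to be minimized.

def pareto_frontier(points):
    """Return the points not dominated by any other point.

    q dominates p if q <= p in every objective and q != p somewhere.
    """
    frontier = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (4.0, 4.0)]
front = pareto_frontier(designs)
# (2.5, 3.5) and (4.0, 4.0) are dominated by (2.0, 3.0) and drop out.
```

This brute-force check is O(n²); evolutionary optimizers maintain the non-dominated set incrementally, but the dominance test is the same.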
Abstract:
This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck. Consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a formulation recently published. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is proposed. For sparse problems (i.e., when job ready times are dispersed widely), the lower bounds are close to optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time. The decomposition approach improved the lower bounds (or linear relaxation solution) of the mixed-integer formulation.
At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report solutions close to optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
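The batching rules described above (a batch's processing time is its longest job time, its ready time is its latest arrival, and its total size must fit the chamber capacity) can be sketched with a simple greedy grouping. The grouping rule and the data below are illustrative, not the dissertation's heuristics:

```python
# Jobs are (size, proc_time, ready_time). Group them into batches that
# respect chamber capacity, then compute the single-chamber makespan.

def form_batches(jobs, capacity):
    """Greedy grouping: longest processing time first, first batch that fits."""
    batches = []
    for job in sorted(jobs, key=lambda j: -j[1]):
        for batch in batches:
            if sum(j[0] for j in batch) + job[0] <= capacity:
                batch.append(job)
                break
        else:
            batches.append([job])
    return batches

def makespan(batches):
    """Process batches in ready-time order on one chamber.

    A batch starts at max(current time, its ready time) and runs for
    the longest processing time among its jobs.
    """
    t = 0.0
    for batch in sorted(batches, key=lambda b: max(j[2] for j in b)):
        ready = max(j[2] for j in batch)
        proc = max(j[1] for j in batch)
        t = max(t, ready) + proc
    return t

jobs = [(2, 5.0, 0.0), (3, 4.0, 1.0), (4, 2.0, 2.0), (1, 3.0, 0.0)]
batches = form_batches(jobs, capacity=6)
```

Grouping long jobs together keeps short jobs from being stretched to a long batch time, which is why the sketch sorts by processing time before packing.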
Abstract:
With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. While the generation of audible noise and associated vibration modes is characteristic of all electric motors, it is especially problematic in low-speed sensorless rotary actuation applications that use the high-frequency voltage injection technique. This dissertation is aimed at solving the problem of optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time. A neural-network-based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created based on noise measurements in a specially built chamber. The developed noise model is then integrated into the high-frequency-injection-based sensorless control scheme so that appropriate tradeoffs and mitigation techniques can be devised. This improves the position estimation and control performance while keeping the noise below a certain level. Genetic algorithms were used to include the noise optimization parameters in the developed control algorithm. A novel wavelet-based filtering approach is proposed in this dissertation for the sensorless control algorithm at low speed. This novel filter is capable of extracting the position information at low values of injection voltage where conventional filters fail.
This filtering approach can be used in practice to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained are important and represent original contributions that can be helpful in choosing optimal parameters for sensorless control algorithms in many practical applications.
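A genetic algorithm of the kind mentioned above evolves a population of candidate parameter values through selection, crossover, and mutation. The sketch below minimizes a stand-in quadratic cost over a single parameter; the cost function, bounds, and GA settings are all illustrative, not the dissertation's noise model:

```python
# Minimal genetic algorithm: keep the best half of the population,
# breed the rest by averaging two parents (crossover) and adding
# Gaussian noise (mutation), clamped to the parameter bounds.

import random

def genetic_minimize(cost, bounds, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]            # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                   # crossover
            child += rng.gauss(0, 0.1 * (hi - lo))  # mutation
            children.append(min(hi, max(lo, child)))
        population = parents + children
    return min(population, key=cost)

# Stand-in cost with its minimum at 3.0; the GA should land near it.
best = genetic_minimize(lambda x: (x - 3.0) ** 2, bounds=(0.0, 10.0))
```

In a multi-parameter tuning problem, the same loop operates on parameter vectors, with crossover and mutation applied per component.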
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study is focused on a formal definition of the problem. Mathematical programming techniques are applied to model this problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model with small instances to validate the mathematical formulation. The computational results show that it is not practical to solve problems of industrial size using a commercial solver. The second phase of this study is focused on the development of an effective solution approach for large-scale instances of this problem. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting bottleneck and earliest operation finish time rules are the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high. The proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio.
The proposed heuristic is applied to an industrial case to further evaluate its performance. The results show that it can improve total profit by an average of 16.62%. This research contributes to the production planning literature a complete mathematical definition of the problem and an effective approach to solving it at industrial scale.
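The Wagner-Whitin algorithm underlying the proposed minimum cost heuristic is a dynamic program over the planning horizon: the best cost through period t is the cheapest combination of an earlier plan plus one final production run covering the remaining periods. A minimal sketch with illustrative demands and costs:

```python
# Wagner-Whitin dynamic program for single-item uncapacitated lot
# sizing: choose production periods to minimize setup plus holding
# costs over the horizon.

def wagner_whitin(demand, setup, hold):
    """demand: per-period demands; setup: fixed cost per production
    run; hold: cost to carry one unit for one period.
    Returns the minimal total cost over the horizon."""
    T = len(demand)
    best = [0.0] * (T + 1)  # best[t] = min cost covering periods 0..t-1
    for t in range(1, T + 1):
        candidates = []
        for j in range(t):  # last run produces for periods j..t-1
            carry = sum((i - j) * demand[i] * hold for i in range(j, t))
            candidates.append(best[j] + setup + carry)
        best[t] = min(candidates)
    return best[T]

# Illustrative instance: four periods, setup cost 100, holding cost 1.
cost = wagner_whitin([20, 50, 10, 50], setup=100.0, hold=1.0)
```

For this instance the optimum is to produce in periods 1 and 4, carrying period 2's and 3's demand from period 1 (two setups of 100 plus 70 in holding, total 270), which the DP finds by comparing all last-run start periods.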