884 results for Optimal control problem
Abstract:
Pulse Response Based Control (PRBC) is a recently developed minimum-time control method for flexible structures. The flexible behavior of the structure is represented through a set of discrete-time sequences, which are the responses of the structure to rectangular force pulses applied by the actuators that control the structure. The set of pulse responses, the desired outputs, and the force bounds form a numerical optimization problem, whose solution is a minimum-time piecewise-constant control sequence for driving the system to a desired final state. The method was developed for driving positive semi-definite systems. In case the system is positive definite, some final states of the system may not be reachable. Necessary conditions for reachability of the final states are derived for systems with a finite number of degrees of freedom. Numerical results are presented that confirm the derived analytical conditions. Numerical simulations of maneuvers of distributed parameter systems have shown a relationship between the error in the estimated minimum control time and the sampling interval.
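The optimization step described above can be illustrated with a toy sketch (not the paper's data or algorithm): a single assumed pulse-response sequence for one actuator, and a linear-programming feasibility check per horizon length, so the estimated minimum time is the shortest horizon whose constraints admit a bounded control sequence.

```python
# Hypothetical sketch of the PRBC idea for a single-input system: the final
# output is a superposition of recorded pulse responses, and the minimum-time
# problem reduces to a sequence of linear feasibility checks.
import numpy as np
from scipy.optimize import linprog

# Assumed toy pulse response of a lightly damped mode (illustrative only)
dt = 0.1
k = np.arange(60)
h = np.exp(-0.05 * k) * np.sin(0.5 * k) * dt  # response to one unit pulse

def reachable_in(N, y_target, u_max):
    """Can a piecewise-constant input |u| <= u_max reach y_target in N steps?"""
    # Final output is the convolution sum: y[N] = sum_j h[N-1-j] * u[j]
    a = h[N - 1::-1][None, :]          # 1 x N equality-constraint row
    res = linprog(c=np.zeros(N), A_eq=a, b_eq=[y_target],
                  bounds=[(-u_max, u_max)] * N, method="highs")
    return res.status == 0             # 0 = feasible solution found

# Minimum control time: the smallest N for which the problem is feasible
N_min = next(N for N in range(1, 61) if reachable_in(N, 0.5, 1.0))
```

The bisection or linear search over the horizon, with one linear program per candidate, mirrors the structure of minimum-time formulations; the real method handles multiple actuators and full state targets.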
Abstract:
Industrial applications demand that robots operate according to the position and orientation of their end effector. This requires solving the inverse kinematics problem, which allows the joint displacements of the manipulator needed to accomplish a given objective to be determined. Complete studies of the dynamic control of robotic joints are also necessary. Initially, this article focuses on the implementation of numerical algorithms for the solution of the inverse kinematics problem and on the modeling and simulation of dynamic systems, using a real-time implementation. The modeling and simulation of dynamic systems are performed with an emphasis on off-line programming. Next, a complete study of the control strategies is carried out through the study of several elements of a robotic joint, such as the DC motor, inertia, and gearbox. Finally, a trajectory generator, used as input for a generic group of joints, is developed, and a proposal for the implementation of the joint controllers using an EPLD development system is presented.
Abstract:
In this paper, the optimum design of 3R manipulators is formulated and solved using an algebraic formulation of the workspace boundary. Manipulator design can be approached as an optimization problem in which the objective functions are the size of the manipulator and the workspace volume, and the constraints can be given as a prescribed workspace volume. The numerical solution of the optimization problem is investigated using two different numerical techniques, namely sequential quadratic programming and simulated annealing. Numerical examples illustrate the design procedure and show the efficiency of the proposed algorithms.
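A minimal simulated-annealing sketch of this kind of constrained design problem: minimize manipulator size (here, the sum of link lengths) under a prescribed workspace-volume constraint handled by a penalty term. The volume formula below is a placeholder (a sphere of radius equal to the total reach), not the paper's algebraic workspace boundary.

```python
# Simulated annealing on link lengths with a penalty for violating the
# prescribed workspace volume. All numbers are illustrative assumptions.
import math
import random

random.seed(1)

V_REQ = 40.0  # prescribed workspace volume (assumed units)

def volume(lengths):
    # Placeholder workspace model: sphere of radius = total reach
    r = sum(lengths)
    return 4.0 / 3.0 * math.pi * r ** 3

def cost(lengths):
    # Objective (size) plus quadratic penalty for volume shortfall
    penalty = max(0.0, V_REQ - volume(lengths)) ** 2
    return sum(lengths) + 1e3 * penalty

def anneal(x, T=1.0, cooling=0.999, steps=20000):
    best, best_c = list(x), cost(x)
    cur, cur_c = list(x), best_c
    for _ in range(steps):
        # Perturb each link length, keeping lengths positive
        cand = [max(0.05, xi + random.gauss(0, 0.05)) for xi in cur]
        c = cost(cand)
        # Metropolis acceptance: always downhill, sometimes uphill
        if c < cur_c or random.random() < math.exp((cur_c - c) / T):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        T *= cooling  # geometric cooling schedule
    return best, best_c

links, best_cost = anneal([2.0, 2.0, 2.0])
```

The solution settles near the smallest total reach whose sphere still meets the prescribed volume; a sequential-quadratic-programming solver would attack the same penalized or constrained formulation with gradients instead of random moves.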
Abstract:
The main topic of the thesis is optimal stopping. This is treated in two research articles. In the first article we introduce a new approach to optimal stopping of general strong Markov processes. The approach is based on the representation of excessive functions as expected suprema. We present a variety of examples, in particular, the Novikov-Shiryaev problem for Lévy processes. In the second article on optimal stopping we focus on differentiability of excessive functions of diffusions and apply these results to study the validity of the principle of smooth fit. As an example we discuss optimal stopping of sticky Brownian motion. The third research article offers a survey-like discussion of Appell polynomials. The crucial role of Appell polynomials in optimal stopping of Lévy processes was noticed by Novikov and Shiryaev. They described the optimal rule in a large class of problems via these polynomials. We exploit the probabilistic approach to Appell polynomials and show that many classical results are obtained with ease in this framework. In the fourth article we derive a new relationship between the generalized Bernoulli polynomials and the generalized Euler polynomials.
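As a reading aid, the probabilistic definition of Appell polynomials alluded to above can be stated in its standard form (not specific to the thesis): for a random variable $Y$ with finite exponential moments, the Appell polynomials $Q_n$ associated with $Y$ are generated by

```latex
\frac{e^{ux}}{\mathbb{E}\!\left[e^{uY}\right]}
  \;=\; \sum_{n=0}^{\infty} Q_n(x)\,\frac{u^n}{n!},
\qquad\text{which implies the mean-value property}\qquad
\mathbb{E}\!\left[Q_n(x+Y)\right] \;=\; x^n .
```

In the Novikov-Shiryaev setting, $Y$ is taken to be the supremum of the (appropriately killed) Lévy process, and for power-type rewards the optimal rule is to stop once the process exceeds the largest root of $Q_n$; this is the sense in which the polynomials "describe the optimal rule" in a large class of problems.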
Abstract:
Papermaking is often disturbed by undesired compounds that can form deposits on process surfaces, which in turn can cause disturbances in paper production and impair paper quality. Besides deposits of wood resin, stone-like scales of sparingly soluble salts are common. In daily life, lime scale in coffee makers and kettles is an example of a similar problem. In the pulp and paper industry, one of the most problematic compounds is calcium oxalate; this salt is almost insoluble in water and its deposits are very difficult to remove. Calcium oxalate is also known as one of the causes of kidney stones in humans. Wood, and especially bark, always contains a certain amount of oxalate, but a larger source is the oxalic acid formed when pulp is bleached with oxidizing chemicals, e.g. hydrogen peroxide. Calcium oxalate forms when oxalic acid reacts with calcium entering the process with the raw water, the wood, or various additives. This thesis investigated the factors that affect the formation of oxalic acid and the precipitation of calcium oxalate, using bleaching and precipitation experiments. The research focused in particular on different ways of preventing deposit formation in the manufacture of wood-containing paper. The results of this thesis show that the formation of oxalic acid and the precipitation of calcium oxalate can be influenced by process-engineering and wet-end chemistry measures. Careful debarking of the wood, controlled conditions during alkaline peroxide bleaching, careful handling and control of other dissolved and colloidal substances, and the use of tailored deposit-control chemistry are key factors. The results can be used when designing bleaching sequences for different pulps and for solving problems caused by calcium oxalate. The research methods used in the precipitation studies and for the evaluation of additives can also be applied in other fields, e.g. the brewing and sugar industries, where calcium oxalate problems are common. (The same abstract was given in Finnish; the duplicate is omitted here.)
Abstract:
The resistance of barnyardgrass (Echinochloa crus-galli) to imidazolinone herbicides is a worldwide problem in paddy fields. A rapid diagnosis is required for the selection of adequate prevention and control practices. The objectives of this study were to develop rapid bioassays to identify resistance to imidazolinone herbicides in barnyardgrass and to evaluate the efficacy of alternative herbicides for the post-emergence control of resistant biotypes. Three experiments were conducted to develop methods for diagnosing resistance to imazethapyr and imazapyr + imazapic in barnyardgrass at the seed, seedling, and tiller stages, and a pot experiment was carried out to determine the efficacy of six herbicides applied post-emergence to 13 biotypes of barnyardgrass resistant to imidazolinones. The seed soaking bioassay was not able to differentiate the resistant and susceptible biotypes. Resistance of barnyardgrass to imidazolinones was effectively discriminated in the seedling and tiller bioassays seven days after incubation at concentrations of 0.001 and 0.0001 mM, respectively, for both imazethapyr and imazapyr + imazapic. The biotypes identified as resistant to imidazolinones showed different patterns of susceptibility to penoxsulam, bispyribac-sodium, and pyrazosulfuron-ethyl, and were all controlled with profoxydim and cyhalofop-butyl. The seedling and tiller bioassays are effective in the diagnosis of barnyardgrass resistance to imidazolinone herbicides, providing an in-season opportunity to identify the need for alternative herbicides applied post-emergence to control the resistant biotypes.
Abstract:
The objectives of this study were to evaluate baby corn yield, green corn yield, and grain yield in corn cultivar BM 3061, with weed control achieved via a combination of hoeing and intercropping with gliricidia, and determine how sample size influences weed growth evaluation accuracy. A randomized block design with ten replicates was used. The cultivar was submitted to the following treatments: A = hoeings at 20 and 40 days after corn sowing (DACS), B = hoeing at 20 DACS + gliricidia sowing after hoeing, C = gliricidia sowing together with corn sowing + hoeing at 40 DACS, D = gliricidia sowing together with corn sowing, and E = no hoeing. Gliricidia was sown at a density of 30 viable seeds m⁻². After harvesting the mature ears, the area of each plot was divided into eight sampling units measuring 1.2 m² each to evaluate weed growth (above-ground dry biomass). Treatment A provided the highest baby corn, green corn, and grain yields. Treatment B did not differ from treatment A with respect to the yield values for the three products, and was equivalent to treatment C for green corn yield, but was superior to C with regard to baby corn weight and grain yield. Treatments D and E provided similar yields and were inferior to the other treatments. Therefore, treatment B is a promising one. The relation between coefficient of experimental variation (CV) and sample size (S) to evaluate growth of the above-ground part of the weeds was given by the equation CV = 37.57 S^(-0.15), i.e., CV decreased as S increased. The optimal sample size indicated by this equation was 4.3 m².
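The fitted power-law relation reported above can be checked directly; the values below simply evaluate CV = 37.57 S^(-0.15) at the unit plot size and at the reported optimal sample size.

```python
# Worked check of the reported relation between coefficient of
# experimental variation (CV, %) and sample size (S, m^2).
def cv(sample_size_m2: float) -> float:
    return 37.57 * sample_size_m2 ** -0.15

cv_unit = cv(1.2)  # CV for a single 1.2 m^2 sampling unit
cv_opt = cv(4.3)   # CV at the reported optimal sample size of 4.3 m^2
```

Because the exponent is negative, CV falls monotonically as the sampled area grows, which is the behavior the abstract describes.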
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and on-demand video transcoding service for different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on the under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two subapproaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
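The interplay of reactive provisioning and admission control described above can be sketched in a few lines. This is a hypothetical illustration with made-up thresholds and capacities, not the thesis's ARVUE or admission-control algorithms.

```python
# Toy reactive VM provisioning with a simple admission-control guard.
SCALE_UP, SCALE_DOWN = 0.8, 0.3  # utilization thresholds (assumed)
CAPACITY = 100                    # sessions one VM can serve (assumed)

def provision(vms: int, sessions: int) -> int:
    """Reactive rule: add or remove a VM to keep utilization in band."""
    util = sessions / (vms * CAPACITY)
    if util > SCALE_UP:
        vms += 1                  # over-utilized: provision another VM
    elif util < SCALE_DOWN and vms > 1:
        vms -= 1                  # under-utilized: consolidate
    return vms

def admit(vms: int, sessions: int) -> bool:
    """Admission control: reject sessions beyond provisioned capacity."""
    return sessions < vms * CAPACITY

# Simulate a load ramp: provisioning reacts with a lag, while admission
# control keeps the already-provisioned VMs from being overloaded.
vms, load = 1, [50, 90, 150, 220, 180, 60]
history = []
for sessions in load:
    admitted = min(sessions, vms * CAPACITY)  # excess sessions are rejected
    vms = provision(vms, admitted)
    history.append((admitted, vms))
```

The lag is visible at the 220-session step: only 200 sessions fit the two provisioned VMs, the rest are rejected, and a third VM arrives for the next interval; proactive (prediction-based) provisioning aims to remove exactly this lag.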
Abstract:
The increased awareness and evolved consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs that fulfill these standards can be hampered by various low-molecular-weight contaminants, such as residues of antibiotics used in animals, or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts that can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole (3-methylindole), one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
Investment has always been considered an essential backbone, a so-called 'locomotive', of competitive economies. However, in various countries the state operates under tight budget constraints when investing in capital-intensive projects. In response, cooperation between the public and private sectors has grown through public-private mechanisms. Promoting favorable arrangements for collaboration between the public and private sectors in the provision of policies, services, and infrastructure in Russia can help address problems of dry port development that neither municipalities nor the private sector can solve alone. Stimulating public-private collaboration is especially significant given the exposure to externalities that affect the magnitude of risks during all phases of project realization. In these circumstances, project risk is increasingly becoming a subject of joint research and of risk management practice, which is viewed as a key approach for taking active measures against existing global and specific sources of uncertainty. Meanwhile, relatively little progress has been made on including resilience aspects in the planning of dry port construction, which would instruct the capacity planner on how to mitigate disruptions that may lead to millions of dollars in losses through deviation of future cash flows from the flows expected for the project. Current experience shows that the existing methodological base has been developed in a fragmentary way within the separate steps of supply chain risk management (SCRM): the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The lack of a systematic approach hinders risk management in dry port implementation.
Therefore, managing the various risks arising during the investment phases of dry port projects still presents a considerable challenge from both practical and theoretical points of view. In this regard, the present research is a logical continuation of fundamental research in financial models and theories (e.g., the capital asset pricing model and real option theory) and also complements portfolio theory. The goal of the study is to design methods and models that facilitate dry port implementation through the mechanism of public-private partnership in the national market, which implies the need to mitigate, first and foremost, the shortage of investment and the consequences of risks. The research problem was formulated on the basis of identified contradictions, which arise from the trade-off between the opportunities investors can gain from developing terminal business in Russia (i.e., dry port implementation) and the associated risks. As a rule, the higher the investment risk, the greater the expected return should be; however, investors differ in their tolerance for risk, so it is advisable to seek an optimal investment. In this study, the optimum relates to the search for an efficient portfolio that satisfies the investor, depending on his or her degree of risk aversion. There are many theories and methods in finance concerning investment choices. Nevertheless, the appropriateness and effectiveness of particular methods should be judged with allowance for the specifics of the investment projects. For example, investments in dry ports involve not only lump sums of financial flows but also long payback periods.
As a result, the capital intensity and long duration of their construction make it necessary for investors to ensure return on investment (profitability) along with rapid recovery of the investment (liquidity), while recognizing that the stochastic nature of the project environment is hardly captured by formula-based approaches. The current theoretical base for economic appraisal of dry port projects usually treats net present value (NPV) as a technique superior to other decision-making criteria. For example, portfolio theory, which considers investors' differing risk preferences and utility structures, regards net present value as a better criterion of project appraisal than the discounted payback period (DPP). In business practice, meanwhile, the DPP is more popular. Since the NPV rests on assumptions of certainty about project life, it alone cannot accurately determine whether a project should be accepted in an environment that is not free of uncertainty. To reflect the period of the project's useful life that is exposed to risks from changes in political, operational, and financial factors, the second capital budgeting criterion, the discounted payback period, is profoundly important, particularly in the Russian environment. These statements represent contradictions that exist in the theory and practice of the applied science. It would therefore be desirable to relax the assumptions of portfolio theory and regard the DPP as a no less relevant appraisal approach for assessing investment and measuring risk. At the same time, the rationality of using both project performance criteria depends on the methods and models with which these appraisals are calculated in feasibility studies.
Deterministic methods cannot ensure the required precision of the results, while stochastic models provide sufficient accuracy and reliability, provided that the risks are properly identified, evaluated, and mitigated. Otherwise, the projected performance indicators may not be confirmed during project realization. For instance, economic and political instability can undo hard-earned gains and create a need to attract additional finance for the project. Sources of alternative investment, as well as supporting mitigation strategies, can be studied during the initial phases of project development. During this period, the effectiveness of the investment undertakings can also be improved by including various investors, e.g. Russian Railways' enterprises and other private companies, in the dry port projects. However, evaluating the effectiveness of different investors' participation in the project lacks methods and models that would permit a feasibility study incorporating the quantitative characteristics of risks and mitigation strategies matched to the investors' risk tolerance. For this reason, the research proposes a combination of the Monte Carlo method, the discounted cash flow technique, real option theory, and portfolio theory via a system dynamics simulation approach. This methodology allows a comprehensive risk management process for dry port development, covering all aspects of the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The designed system dynamics model can be recommended to decision-makers on dry port projects financed via public-private partnership.
It permits investors to make an appraisal decision based on the random variables of net present value and discounted payback period under different risk factors, e.g. revenue risks, land acquisition risks, traffic volume risks, construction hazards, and political risks. Here the statistical mean is used to express the expected values of the DPP and NPV, the standard deviation is proposed as a characteristic of risk, and the elasticity coefficient is applied for ranking risks. Additionally, the risk of failure of project investments and the guaranteed recoupment of capital investment can be examined with the help of the model. On the whole, applying these modern simulation methods creates preconditions for controlling the process of dry port development, i.e. making managerial changes and identifying the most stable parameters that contribute to optimal alternative scenarios of project realization in an uncertain environment. The system dynamics model allows analysis of the interactions in the complex mechanism of the risk management process of dry port development, and supports proposals for improving investment effectiveness via estimation of different risk management strategies. For comparing and ranking these alternatives in order of preference to the investor, the proposed indicators of investment efficiency, concerning the NPV, the DPP, and the coefficient of variation, can be used. Thus, rational investors, who are averse to taking increased risks unless compensated by a commensurate increase in the expected utility of the risky prospect of dry port development, can be guided by the derived marginal utility of investments, computed from the results of the system dynamics model.
In conclusion, the outlined theoretical and practical implications for risk management, a key characteristic of public-private partnerships, can help analysts and planning managers in budget decision-making, substantially alleviating the effects of various risks and avoiding unnecessary cost overruns in dry port projects.
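The appraisal step described above, NPV and discounted payback period treated as random variables driven by uncertain cash flows, can be sketched with a plain Monte Carlo loop. All figures below (discount rate, outlay, cash-flow distribution) are illustrative assumptions, not the thesis's data or its system dynamics model.

```python
# Monte Carlo estimation of NPV and discounted payback period (DPP)
# for a project with uncertain yearly cash inflows.
import random
import statistics

random.seed(42)
RATE, INVEST, YEARS, RUNS = 0.10, 100.0, 15, 5000

def one_run():
    """Simulate one project realization; return (NPV, DPP or None)."""
    cum, dpp = -INVEST, None
    for t in range(1, YEARS + 1):
        cash = random.gauss(18.0, 6.0)     # uncertain yearly inflow
        cum += cash / (1 + RATE) ** t      # accumulate discounted cash flow
        if dpp is None and cum >= 0:
            dpp = t                        # first year cumulative DCF >= 0
    return cum, dpp                        # cum is the NPV after year 15

runs = [one_run() for _ in range(RUNS)]
npvs = [npv for npv, _ in runs]
mean_npv = statistics.mean(npvs)           # expected value of NPV
risk = statistics.stdev(npvs)              # standard deviation as risk measure
```

As in the text, the mean plays the role of the expected value and the standard deviation the role of the risk characteristic; a run whose cumulative discounted cash flow never turns positive has no payback within the project life (DPP of None), which is exactly the uncertainty the DPP criterion is meant to expose.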
Abstract:
The aim of this Master's thesis is to determine how internal control should be structured in a Finnish retail company in order to fulfil the requirements of the Finnish Corporate Governance Code and add value for the company, and to analyse the added value that structured, centrally led internal control can provide for the case company. The fundamental theoretical framework of the study stems from the theory of the firm; the principal-agent problem is the primary motivator for internal control. Regulatory requirements determine the thresholds that a company's internal control must reach. The research was carried out as a case study; methodologically the study is qualitative, and the empirical data were gathered through interviews and participant observation. The data gathered (processes, controls, etc.) are used to understand the control environment of the company and to assess the current state of internal control. Deficiencies and other identified points of development are then discussed.
Abstract:
The Chinese welding industry is growing every year due to the rapid development of the Chinese economy. Increasingly, companies around the world are looking to use Chinese enterprises as their cooperation partners. However, the Chinese welding industry also has its weaknesses, such as relatively low quality and weak management. A modern, advanced welding management system appropriate for local socio-economic conditions is required to enable Chinese enterprises to further enhance their business development. The thesis researches the design and implementation of a new welding quality management system for China. This new system is called the 'welding production quality control management model in China' (WQMC). Constructed on the basis of a survey analysis and in-company interviews, the welding management system comprises the following elements and perspectives: a 'localized congenital existing problem resolution strategies' (LCEPRS) database, a 'human factor designed training' (HFDT) strategy, the theory of modular design, ISO 3834 requirements, total welding management (TWM), and lean manufacturing (LEAN) theory. The methods used in the research are literature review, questionnaires, interviews, and the author's model design experiences and observations, i.e. the approach is primarily qualitative and phenomenological. The thesis describes the design and implementation of an HFDT strategy in Chinese welding companies. Such training is an effective way to increase employees' awareness of quality and of issues associated with quality assurance. The study identified widely existing problems in the Chinese welding industry and constructed an LCEPRS database that can be used in efforts to mitigate and avoid common problems. The work uses the theory of modular design, TWM, and LEAN as tools for the implementation of the WQMC system.
Abstract:
The purpose of this study was to determine the effect that calculators have on the attitudes and numerical problem-solving skills of primary students. The sample used for this research was one of convenience: two grade 3 classes within the York Region District School Board. The students in the experimental group used calculators for this problem-solving unit, while the students in the control group completed the same numerical problem-solving unit without calculators. The pretest-posttest control group design was used for this study. All students involved completed a computational pretest and an attitude pretest. At the end of the study, the students completed a computational posttest; five students from the experimental group and five from the control group received their posttests in the form of a taped interview. At the end of the unit, all students once again completed the attitude scale that they had received before the numerical problem-solving unit. Data for qualitative analysis included anecdotal observations, journal entries, and transcribed interviews; the constant comparative method was used to analyze these data. A t test was also performed to determine whether there were changes in test and attitude scores between the control and experimental groups. Overall, the findings of this study support the hypothesis that calculators improve the attitudes of primary students toward mathematics. Also, there is some evidence to suggest that calculators improve the computational skills of grade 3 students.
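A group comparison of the kind used in the study can be sketched with an independent-samples t test; the score data below are made up for illustration and are not the study's.

```python
# Independent-samples t test comparing posttest scores of a control
# group and an experimental group (hypothetical data).
from scipy import stats

control = [62, 58, 71, 65, 60, 67, 59, 63]
experimental = [70, 74, 66, 72, 69, 75, 68, 71]

t_stat, p_value = stats.ttest_ind(experimental, control)
```

A small p-value (conventionally below 0.05) would indicate that the difference in group means is unlikely to be due to chance alone, which is the inference the pretest-posttest design supports.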
Abstract:
The relevance of attentional measures to cognitive and social adaptive behaviour was examined in an adolescent sample. Unlike previous research, the influence of both inhibitory and facilitory aspects of attention was studied. In addition, contributions made by these attentional processes were compared with traditional psychometric measures of cognitive functioning. Data were gathered from 36 grade 10 and 11 high school students (20 male and 16 female) with a variety of learning and attentional difficulties. Data collection was conducted in the course of two testing sessions. In the first session, students completed questionnaires regarding their medical history and everyday behaviours (the Brock Adaptive Functioning Questionnaire), along with non-verbal problem-solving tasks and motor speed tasks. In the second session, students performed working memory measures and computer-administered tasks assessing inhibitory and facilitory aspects of attention. Grades and teacher-rated measures of cognitive and social impulsivity were also gathered. Results indicate that attentional control has both cognitive and social/emotional implications. Performance on negative priming and facilitation trials from the Flanker task predicted grades in core courses, social functioning measures, and cognitive and social impulsivity ratings. However, beneficial effects for academic and social functioning associated with inhibition were less prevalent in those demonstrating a greater ability to respond to facilitory cues. There was also some evidence that high levels of facilitation were less beneficial to academic performance, and female students were more likely to exceed optimal levels of facilitory processing. Furthermore, lower negative priming was associated with classroom-rated distraction and hyperactivity, but the relationship between inhibition and social aspects of impulsivity was stronger for adolescents with learning or reading problems, and the relationship between inhibition and cognitive impulsivity was stronger for male students. In most cases, attentional measures were predictive of performance outcomes independent of traditional psychometric measures of cognitive functioning. These findings provide support for neuropsychological models linking inhibition to control of interference and arousal, and emphasize the fundamental role of attention in everyday adolescent activities. The findings also warrant further investigation into the ways in which inhibitory and facilitory attentional processes interact, and the context-dependent nature of attentional control.
Resumo:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. High reliability was found between the pretest and the posttest; that is, students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses that were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance: students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa.
However, no significant difference was observed in problem-solving improvement between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Non-significant trends were noted in results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the data base for open coding analysis. This analysis revealed six main themes that best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve desired thesis outcomes.