917 results for Automatic Control Theory
Abstract:
In this Bachelor Thesis I provide readers with tools and scripts for the control of a 7-DOF manipulator, backed by some theory from Robotics and Computer Science to better contextualize the work done. In practice, we cover the most common software and development environments used for this task: ROS, visual simulation with VREP and RVIZ, and MoveIt!, an almost "stand-alone" ROS extension offering a very complete programming interface for trajectory planning and obstacle avoidance. As the introduction chapter makes clear, the capability of detecting collision objects through a camera sensor and re-planning to the desired end-effector pose is not enough. In fact, this work is embedded in a more complex system in which recognition of particular objects is needed. Using a ROS package and customized scripts, a detailed procedure is provided for distinguishing a particular object, retrieving its reference frame with respect to a known one, and then navigating to that target. Together with the technical details, the aim is also to report working scripts and a dedicated appendix (A) to refer to when putting everything together.
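As a rough illustration of the workflow described above, the following is a minimal sketch (not the thesis' actual scripts) of how MoveIt!'s Python interface and tf can be combined to plan to a detected object's frame; the planning group name "manipulator" and the frame name "object_frame" are placeholder assumptions.

```python
# Sketch: look up a detected object's frame via tf and ask MoveIt!
# to plan a collision-aware trajectory to that pose.
import sys
import rospy
import tf
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("plan_to_detected_object")

group = moveit_commander.MoveGroupCommander("manipulator")  # 7-DOF arm group (assumed name)
listener = tf.TransformListener()

# Retrieve the object's pose with respect to the planning frame.
planning_frame = group.get_planning_frame()
listener.waitForTransform(planning_frame, "object_frame",
                          rospy.Time(0), rospy.Duration(5.0))
(trans, rot) = listener.lookupTransform(planning_frame, "object_frame", rospy.Time(0))

target = PoseStamped()
target.header.frame_id = planning_frame
target.pose.position.x, target.pose.position.y, target.pose.position.z = trans
(target.pose.orientation.x, target.pose.orientation.y,
 target.pose.orientation.z, target.pose.orientation.w) = rot

group.set_pose_target(target)
group.go(wait=True)        # plan (with obstacle avoidance from the planning scene) and execute
group.stop()
group.clear_pose_targets()
```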
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies, forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ te/n, and by k(1-p)/(n-te) when p > te/n, where te is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np-te)/(n-te) for te/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1-p, can be achieved, and the residual loss rate is lower bounded by (p+r-1)/r for 1-r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their advantages as well as their disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding can be used to maximize the performance of practical systems. © 2010 IEEE.
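To make the bounds above concrete, here is a small illustrative helper (hypothetical function names, not code from the paper) that evaluates the stated throughput cap and residual-loss lower bound; for a maximum distance separable code, te = n - k, and the expressions reduce to 1-p and (p+r-1)/r as quoted.

```python
# Sketch: evaluate the throughput and residual-loss bounds quoted above
# for an (n, k) forward erasure control code with erasure capability te.
def throughput_cap(n, k, te, p):
    """Upper bound on normalized throughput at packet loss rate p."""
    r = k / n
    if p <= te / n:
        return r                      # capped by the code rate
    return k * (1 - p) / (n - te)     # loss exceeds the code's correction capability

def residual_loss_lower_bound(n, te, p):
    """Lower bound on residual packet loss rate for te/n < p <= 1."""
    if p <= te / n:
        return 0.0
    return (n * p - te) / (n - te)

# Example with an MDS code (te = n - k): the throughput cap approaches the
# erasure-channel capacity 1 - p, and the residual bound becomes (p + r - 1)/r.
n, k = 255, 223
te = n - k
print(throughput_cap(n, k, te, 0.05), residual_loss_lower_bound(n, te, 0.2))
```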
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
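For illustration, here is a minimal sketch of how a non-prewhitening (NPW) matched-filter observer and CNR might be computed from ROI ensembles; the array shapes (N x H x W stacks of ROIs), the zero-mean template, and the pooled-variance form of d' are assumptions for illustration, not the dissertation's exact implementation.

```python
# Sketch of an NPW matched-filter observer: the template is the expected
# (noise-free) signal, and detectability is estimated from the template's
# responses to signal-present and signal-absent ROIs.
import numpy as np

def npw_detectability(signal_template, rois_present, rois_absent):
    """d' of an NPW observer from ROI ensembles shaped (N, H, W)."""
    w = signal_template - signal_template.mean()           # zero-mean template
    t_p = np.tensordot(rois_present, w, axes=([1, 2], [0, 1]))
    t_a = np.tensordot(rois_absent,  w, axes=([1, 2], [0, 1]))
    pooled_var = 0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1))
    return (t_p.mean() - t_a.mean()) / np.sqrt(pooled_var)

def cnr(roi_lesion, roi_background):
    """Simple contrast-to-noise ratio, for comparison with the observer models."""
    return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std(ddof=1)
```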
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth, uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
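A minimal sketch of the image-subtraction idea described above (the ROI handling and normalization here are illustrative assumptions): subtracting two repeated scans of the same phantom cancels the fixed background, and the standard deviation of the difference, divided by sqrt(2), estimates the quantum noise in a single image.

```python
# Sketch of the image-subtraction noise measurement.
import numpy as np

def quantum_noise_from_pair(scan_a, scan_b, roi=None):
    """Estimate single-image quantum noise from two repeated scans."""
    diff = scan_a.astype(float) - scan_b.astype(float)   # structured background cancels
    if roi is not None:                                   # optional boolean mask selecting an ROI
        diff = diff[roi]
    return diff.std(ddof=1) / np.sqrt(2.0)                # noise adds in quadrature
```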
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
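For reference, an ensemble NPS estimate from repeated scans could be sketched as follows; the irregular-ROI weighting developed in the dissertation is not reproduced here, so this assumes rectangular, co-registered ROIs and square pixels.

```python
# Sketch: ensemble noise power spectrum (NPS) from repeated scans. The
# ensemble mean removes the deterministic background; the averaged 2-D
# periodogram of the residuals gives the NPS.
import numpy as np

def ensemble_nps(scans, pixel_size_mm):
    """scans: array (N_repeats, H, W) of co-registered square ROIs."""
    scans = scans.astype(float)
    residuals = scans - scans.mean(axis=0)            # remove structured background
    n, h, w = scans.shape
    dft = np.fft.fftshift(np.fft.fft2(residuals), axes=(1, 2))
    nps = (np.abs(dft) ** 2).mean(axis=0) * (pixel_size_mm ** 2) / (h * w)
    freqs = np.fft.fftshift(np.fft.fftfreq(h, d=pixel_size_mm))  # cycles/mm (square ROI assumed)
    return freqs, nps
```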
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
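A minimal sketch of a channelized Hotelling observer of the kind used above; the channel set (simple difference-of-Gaussians channels) and the single-slice form are illustrative assumptions rather than the multi-slice model used in the study.

```python
# Sketch of a channelized Hotelling observer (CHO): ROIs are reduced to a few
# channel outputs, and detectability follows from the Hotelling template
# computed from the channel-space statistics.
import numpy as np

def dog_channels(size, sigmas=(2, 4, 8, 16)):
    """Build simple difference-of-Gaussians channels of shape (C, size, size)."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    gauss = [np.exp(-r2 / (2.0 * s**2)) for s in sigmas]
    return np.stack([g1 / g1.sum() - g2 / g2.sum()
                     for g1, g2 in zip(gauss[:-1], gauss[1:])])

def cho_detectability(rois_present, rois_absent, channels):
    """d' of a CHO from ROI ensembles shaped (N, H, W)."""
    v_p = np.tensordot(rois_present, channels, axes=([1, 2], [1, 2]))  # (N, C) channel outputs
    v_a = np.tensordot(rois_absent,  channels, axes=([1, 2], [1, 2]))
    s = v_p.mean(axis=0) - v_a.mean(axis=0)                            # mean signal in channel space
    k = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))  # average channel covariance
    return float(np.sqrt(s @ np.linalg.solve(k, s)))                   # Hotelling d'
```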
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
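To illustrate the kind of analytical lesion model described above, here is a hedged sketch with explicit size, contrast, and edge-profile parameters; the sigmoidal radial profile and the function and parameter names are assumptions for illustration, not the dissertation's models.

```python
# Sketch: a radially symmetric lesion model whose size, contrast (HU), and
# edge sharpness are explicit parameters. Voxelizing it yields a patch that
# could be blended into a patient image to form a "hybrid" image.
import numpy as np

def lesion_patch(shape, center_vox, radius_mm, contrast_hu, edge_mm, voxel_mm):
    zz, yy, xx = np.indices(shape).astype(float)
    r = voxel_mm * np.sqrt((zz - center_vox[0])**2 +
                           (yy - center_vox[1])**2 +
                           (xx - center_vox[2])**2)
    # Smooth (sigmoidal) transition from full contrast inside to zero outside.
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / max(edge_mm, 1e-6)))

# Usage idea: add the patch to a CT volume ROI to create a hybrid image.
# hybrid_roi = patient_roi + lesion_patch(patient_roi.shape, (16, 32, 32),
#                                         radius_mm=4.0, contrast_hu=-15.0,
#                                         edge_mm=0.8, voxel_mm=0.7)
```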
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
While a great amount of attention is being given to the development of nanodevices, both through academic research and private industry, the field is still emerging. Progress hinges upon the development of tools and components that can precisely control the interaction between light and matter, and that can be efficiently integrated into nano-devices. Nanofibers are one of the most promising candidates for such purposes. However, in order to fully exploit their potential, a more intimate knowledge of how nanofibers interact with single neutral atoms must be gained. As we learn more about the properties of nanofiber modes and the way they interface with atoms, and as technology develops that allows them to be prepared with more precisely known properties, they become more and more adaptable and effective. The work presented in this thesis touches on many topics, which is testament to the broad range of applications and high degree of promise that nanofibers hold. For immediate use, we need to fully grasp how they can best be implemented as sensors, filters, detectors, and switches in existing nano-technologies. Areas of interest also include how they might best be exploited for probing atom-surface interactions, single-atom detection and single-photon generation. Nanofiber research is also motivated by their potential integration into fundamental cold-atom quantum experiments, and the role they can play there. Combining nanofibers with existing optical and quantum technologies is a powerful strategy for advancing areas like quantum computation, quantum information processing, and quantum communication. In this thesis I present a variety of theoretical work exploring a range of the applications listed above. The first work presented concerns the use of the evanescent fields around a nanofiber to manipulate an existing trapping geometry and thereby influence the centre-of-mass dynamics of the atom. The second work presented explores interesting trapping geometries that can be achieved in the vicinity of a fiber in which just four modes are allowed to propagate. In a third study I explore the use of a nanofiber as a detector of small numbers of photons by calculating the rate of emission into the fiber modes when the fiber is moved along next to a regularly separated array of atoms. Also included are some results from a work in progress, in which I consider the scattered field that appears along the nanofiber axis when a small number of atoms trapped along that axis are illuminated orthogonally; some interesting preliminary results are outlined. Finally, in contrast with the rest of the thesis, I consider some interesting physics that can be done in one of the trapping geometries that can be created around the fiber: here I explore the ground states of a phase-separated two-component superfluid Bose-Einstein condensate trapped in a toroidal potential.
Abstract:
The purpose of this study was to assess the intention to exercise among ethnically and racially diverse community college students using the Theory of Planned Behavior (TPB). In addition to identifying the variables associated with motivation or intention of college students to engage in physical activity, this study tested the model of the Theory of Planned Behavior, asking: Does the TPB model explain intention to exercise among a racially/ethnically diverse group of college students? The relevant variables were the TPB constructs (behavioral beliefs, normative beliefs, and control beliefs), which combined to form a measure of intention to exercise. Structural Equation Modeling was used to test the predictive power of the TPB constructs for predicting intention to exercise. Following procedures described by Ajzen (2002), the researcher developed a questionnaire encompassing the external variables of student demographics (age, gender, work status, student status, socio-economic status, access to exercise facilities, and past behavior), major constructs of the TPB, and two questions from the Godin Leisure Time Questionnaire (GLTQ; Godin & Shephard, 1985). Participants were students (N = 255) who enrolled in an on-campus wellness course at an urban community college. The demographic profile of the sample revealed a racially/ethnically diverse study population. The original model that was used to reflect the TPB as developed by Ajzen was not supported by the data analyzed using SEM; however, a revised model that the researcher thought was theoretically a more accurate reflection of the causal relations between the TPB constructs was supported. The GLTQ questions were problematic for some students; those data could not be used in the modeling efforts. The GLTQ measure, however, revealed a significant correlation with intention to exercise (r = .27, p = .001). Post-hoc comparisons revealed significant differences in normative beliefs and attitude toward exercising behavior between Black students and Hispanic students. Compared to Black students, Hispanic students were more likely to (a) perceive “friends” as approving of them being physically active and (b) rate being physically active for 30 minutes per day as “beneficial”. No statistically significant difference was found among groups on overall intention to exercise.
Abstract:
English has been taught as a core and compulsory subject in China for decades. Recently, the demand for English in China has increased dramatically. China now has the world’s largest English-learning population. The traditional English-teaching method cannot continue to be the only approach because it merely focuses on reading, grammar and translation, which cannot meet English learners’ and users’ needs (i.e., communicative competence and skills in speaking and writing). This study was conducted to investigate whether the Picture-Word Inductive Model (PWIM), a new pedagogical method using pictures and inductive thinking, would benefit English learners in China in terms of potentially higher output in speaking and writing. Using Cognitive Load Theory (CLT), specifically its redundancy effect, as a gauge, I investigated whether processing words and a picture concurrently would present a cognitive overload for English learners in China. I conducted a mixed methods research study. A quasi-experiment (pretest, intervention for seven weeks, and posttest) was conducted with 234 students in four groups in Lianyungang, China (58 fourth graders and 57 seventh graders as an experimental group with PWIM, and 59 fourth graders and 60 seventh graders as a control group with the traditional method). No significant difference in the effects of PWIM on vocabulary acquisition was found across grade levels. Observations, questionnaires with open-ended questions, and interviews were deployed to answer the three remaining research questions. A few students felt cognitively overloaded when they encountered too many writing samples, too many new words at one time, repeated words, mismatches between words and pictures, and so on. Many students listed and exemplified numerous strengths of PWIM, while a few mentioned weaknesses. The students expressed the idea that PWIM had a positive effect on their English teaching. As integrated inferences, qualitative findings were used to explain the quantitative result that there were no significant differences in the effects of the PWIM between the experimental and control groups at either grade level, in terms of four contextual aspects: time constraints on PWIM implementation, teachers’ resistance, uncertainty about how to use PWIM, and PWIM being implemented in classrooms of over 55 students.
Abstract:
The electric vehicle (EV) market has seen rapid growth in the recent past. With an increase in the number of electric vehicles on the road, there is an increase in the number of high-capacity battery banks interfacing with the grid. The battery bank of an EV, besides being the fuel tank, is also a huge energy storage unit. Presently, it is used only when the vehicle is being driven and remains idle the rest of the time, rendering it underutilized. On the other hand, there is a need for large energy storage units in the grid to filter out the fluctuations of supply and demand during a day. EVs can help bridge this gap. The EV battery bank can be used to store excess energy from the grid to the vehicle (G2V) or supply stored energy from the vehicle to the grid (V2G), when required. To enable power flow in both directions, a bidirectional AC-DC converter is required. This thesis concentrates on bidirectional AC-DC converters that control power flow in all four quadrants for the application of interfacing an EV battery with the grid. This thesis presents a bidirectional interleaved full-bridge converter topology. This increases the power processing and current handling capability of the converter, which makes it suitable for EVs. A further benefit of the interleaved topology is that it increases the power density of the converter, ensuring optimal use of space for the same power handling capacity. The proposed interleaved converter consists of two full bridges. The corresponding gate pulses of each switch in one cell are phase shifted by 180 degrees from those of the other cell. The proposed converter control is based on the one-cycle controller. To meet the new reactive power handling requirements for grid-connected converters posed by the utilities, the controller is modified to make it suitable for processing reactive power. A fictitious current derived from the grid voltage is introduced in the controller, which controls the converter performance. The current references are generated using second-order generalized integrators (SOGI) and a phase-locked loop (PLL). A digital implementation of the proposed control scheme is developed and implemented using DSP hardware. Simulated and experimental results, based on the converter topology and control technique discussed here, are presented to show the performance of the proposed theory.
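As an illustration of the SOGI building block mentioned above, the following is a rough discrete-time sketch that extracts in-phase and quadrature components of the grid voltage, from which current references can be derived; the gain k, sampling rate, and forward-Euler discretization are illustrative assumptions, not the thesis' DSP implementation.

```python
# Sketch of a discrete-time second-order generalized integrator (SOGI).
import math

class SOGI:
    def __init__(self, omega, k=math.sqrt(2.0), dt=1.0 / 20000.0):
        self.omega, self.k, self.dt = omega, k, dt
        self.v_alpha = 0.0   # in-phase (band-pass filtered) component
        self.v_beta = 0.0    # quadrature component (lags by 90 degrees)

    def step(self, v_in):
        d_alpha = self.k * self.omega * (v_in - self.v_alpha) - self.omega * self.v_beta
        d_beta = self.omega * self.v_alpha
        self.v_alpha += d_alpha * self.dt      # forward-Euler integration
        self.v_beta += d_beta * self.dt
        return self.v_alpha, self.v_beta

# Example: track a 50 Hz grid voltage sampled at 20 kHz.
sogi = SOGI(omega=2 * math.pi * 50.0)
for n in range(2000):
    v = 325.0 * math.sin(2 * math.pi * 50.0 * n / 20000.0)
    v_alpha, v_beta = sogi.step(v)
```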
Abstract:
This chapter addresses the issue of language standardization from two perspectives, bringing together a theoretical perspective offered by the discipline of sociolinguistics with a practical example from international business. We introduce the broad concept of standardization and embed the study of language standardization in the wider discussion of standards as a means of control across society. We analyse the language policy and practice of the Danish multinational, Grundfos, and use it as a “sociolinguistic laboratory” to “test” the theory of language standardization initially elaborated by Einar Haugen to explain the history of modern Norwegian. The table is then turned and a model from International Business by Piekkari, Welch and Welch is used to illuminate recent Norwegian language planning. It is found that the Grundfos case works well with the Haugen model, and the International Business model provides a valuable practical lesson for national language planners, both showing that a “comparative standardology” is a valuable undertaking. More voices “at the table” will allow both theory and practice to be further refined and for the role of standards across society to be better understood.
Abstract:
The pharmaceutical industry wields disproportionate power and control within the medical economy of knowledge, where the desire for profit considerably outweighs health for its own sake. Utilizing the theoretical tools of political philosophy, this project restructures the economy of medical knowledge in order to lessen the oligarchical control possessed by the pharmaceutical industry. Ultimately, this project argues that an economy of medical knowledge structured around communitarian political theory lessens the current power dynamic without taking an anti-capitalist stance. Arising from the core commitments of communitarian liberalism, the production, distribution, and consumption of medical knowledge all become guided processes seeking to realize the common good of quality healthcare. This project also considers two other theoretical approaches: liberalism and egalitarianism. A medical knowledge economy structured around liberal political theory is ultimately rejected, as it empowers the oligarchical status quo. Egalitarian political theory is able to significantly reduce the power imbalance problem but simultaneously renders medical knowledge inconsequential; therefore, it is also rejected.
Abstract:
An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). Up until now, more than 6500 profiles for each radiometric channel have been acquired. As these radiometric data are collected outside the operator’s control and regardless of meteorological conditions, specific and automatic data processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near-real-time data distribution. This procedure is specifically developed to: (1) identify the main measurement issues (i.e., dark signal, atmospheric clouds, spikes and wave-focusing occurrences); (2) validate the final data with a hierarchy of tests to ensure their scientific utility. The procedure, adapted to each of the four radiometric channels, is designed to flag each profile in a way compliant with the data management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, highlighting the procedure’s potential applicability at the global scale. Finally, comparison with modeled surface irradiances allows assessing the accuracy of quality-controlled measured irradiance values and identifying any possible evolution over the float lifetime due to biofouling and instrumental drift.
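To give a flavor of such automatic checks, here is a hedged sketch of a dark-signal check and a spike test that flag a single irradiance profile with Argo-style codes; the thresholds, flag values, and logic are illustrative assumptions, not the operational procedure described above.

```python
# Sketch: flag one downward-irradiance profile with simple dark-signal and
# spike tests, returning point-level flags and a profile-level summary flag.
import numpy as np

GOOD, PROBABLY_BAD, BAD = 1, 3, 4       # subset of Argo quality-flag codes

def qc_radiometric_profile(ed, dark_level, spike_factor=5.0):
    """ed: downward irradiance values ordered by depth; dark_level: sensor dark signal."""
    ed = np.asarray(ed, dtype=float)
    flags = np.full(ed.shape, GOOD, dtype=int)

    # Dark-signal check: values indistinguishable from the sensor dark level.
    flags[ed <= dark_level] = PROBABLY_BAD

    # Spike test: compare each point with the median of its 3-point neighbourhood.
    med = ed.copy()
    med[1:-1] = np.median(np.column_stack([ed[:-2], ed[1:-1], ed[2:]]), axis=1)
    residual = np.abs(ed - med)
    flags[residual > spike_factor * (np.median(residual) + 1e-12)] = BAD

    # Profile-level flag: worst point-level flag.
    return flags, int(flags.max())
```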
Abstract:
Expanding on the growing movement to take academic and other erudite subjugated knowledges and distill them into some graphic form, this “cartoon” is a recounting of the author’s 2014 article, “Big Data, Actionable Information, Scientific Knowledge and the Goal of Control,” Teknokultura, Vol. 11/no. 3, pp. 529-54. It is an analysis of the idea of Big Data and an argument that its power relies on its instrumentalist specificity and not its extent. Mind control research in general, and optogenetics in particular, are the case study. Noir seems an appropriate aesthetic for this analysis, so direct quotes from the article are illustrated by publicly available screen shots from iconic and unknown films of the 20th century. The only addition to the original article is a framing insight from the admirable activist network CrimethInc.
Abstract:
The construction industry requires quality control and regulation of its contingent, unpredictable environment. However, taking too much control from workers can disempower and demotivate. In the 1970s Deci and Ryan developed self-determination theory, which states that in order to be intrinsically motivated, three components are necessary: competence, autonomy and relatedness. This study aims to examine the way in which the three ‘nutriments’ for intrinsic motivation may be undermined by heavy-handed quality control. A critical literature review analyses construction, psychological and management research regarding the control and motivation of workers, using self-determination theory as a framework. Initial findings show that quality management systems do not always work as designed. Workers perceive that unnecessary, wasteful and tedious counter-checking of their work implies that they are not fully trusted by management to work without oversight. Control of workers and pressure for continual improvement may lead to resistance and deception. Controlling mechanisms can break the link between performance and satisfaction, reducing motivation and paradoxically reducing the likelihood of the quality they intend to promote. This study will lead to a greater understanding of control and motivation, facilitating further research into improvements in the application of quality control to maintain employee motivation.
Abstract:
Background: We sought to describe the theory used to design treatment adherence interventions, the content delivered, and the mode of delivery of these interventions in chronic respiratory disease. Methods: We included randomized controlled trials of adherence interventions (compared to another intervention or control) in adults with chronic respiratory disease (8 databases searched; inception until March 2015). Two reviewers screened and extracted data: post-intervention adherence (measured objectively); behavior change theory; content (grouped into psychological, education and self-management/supportive, telemonitoring, shared decision-making); and delivery. “Effective” studies were those with p < 0.05 for adherence rate between groups. We conducted a narrative synthesis and assessed risk of bias. Results: 12,488 articles were screened; 46 studies were included (n = 42, 91% in OSA or asthma) testing 58 interventions (n = 27, 47% were effective). Nineteen (33%) interventions (15 studies) used 12 different behavior change theories. Use of theory (n = 11, 41%) was more common amongst effective interventions. Interventions were mainly educational, self-management or supportive interventions (n = 27, 47%). They were commonly delivered by a doctor (n = 20, 23%), in face-to-face (n = 48, 70%), one-to-one (n = 45, 78%) outpatient settings (n = 46, 79%) across 2–5 sessions (n = 26, 45%) for 1–3 months (n = 26, 45%). Doctors delivered a lower proportion (n = 7, 18% vs n = 13, 28%) and pharmacists a higher proportion (n = 6, 15% vs n = 1, 2%) of effective than ineffective interventions. Risk of bias was high in >1 domain in most studies (n = 43, 93%). Conclusions: Behavior change theory was more commonly used to design effective interventions. Few adherence interventions have been developed using theory, representing a gap between intervention design recommendations and research practice.
Abstract:
Preschool-aged children (≤ 5 years) are at greater risk of sustaining a traumatic brain injury (TBI) than older children, and 90% of these TBIs are mild (mTBI). Numerous studies published over the last two decades show that pediatric mTBI can lead to cognitive, behavioral and psychiatric difficulties in the acute phase which, in some children, persist over the long term. There is a flourishing literature on the impact of mTBI on social functioning and social cognition (the cognitive processes underlying socialization) in school-aged children and adolescents. However, only two studies have examined the impact of preschool mTBI on social development, and no study has investigated the socio-cognitive repercussions of early (preschool) mTBI. The objective of this thesis was therefore to study the consequences of early mTBI on social cognition. To do so, we examined an aspect of social cognition that develops rapidly at this age, namely theory of mind (ToM), which refers to the ability to put oneself in another person's place and understand their perspective. The first article aimed to study two sub-components of ToM, namely false-belief understanding and reasoning about others' desires and emotions, six months post-mTBI. The results indicate that preschool children (18 to 60 months) who sustain an mTBI have significantly poorer ToM six months post-mTBI compared to a control group of non-injured children. The second article aimed to clarify the origin of this ToM decrement following early mTBI, an objective motivated by an ongoing debate in the literature. Many scientists hold that an effect can be attributed to the brain injury only when children with mTBI are compared to children who have sustained an injury not involving the head (e.g., an orthopedic injury). This argument is based on studies showing that, in general, children who are more likely to sustain an injury, whatever its nature, have pre-existing cognitive characteristics (e.g., impulsivity, attentional difficulties). It is therefore possible that the difficulties we believe are attributable to the brain injury were present even before the child sustained the mTBI. In this second study, we therefore compared the performance on ToM tasks of children who had sustained an mTBI to that of children in two control groups: non-injured children and peers with an orthopedic injury. Overall, children with mTBI performed significantly worse on the task assessing reasoning about others' desires and emotions, 6 months post-injury, compared to both control groups. This study also examined the evolution of ToM following mTBI, from 6 to 18 months post-injury; the results show that the poorer performance persists 18 months post-mTBI. Finally, the third aim of this study was to investigate whether there is a link between performance on ToM tasks and social skills, as assessed with a parent-report questionnaire.
Interestingly, ToM was associated with social skills only in children who had sustained an mTBI. Taken together, these two studies highlight specific, long-lasting repercussions of early mTBI on ToM, and poorer ToM was associated with poorer social skills. This thesis demonstrates that an mTBI at a young age can hinder socio-cognitive development through its repercussions on ToM. These results support the theory that the young, immature brain is especially vulnerable to brain injury. Finally, these studies highlight the need to study this age group, rather than extrapolating from results obtained with older children, since the developmental issues are different and potentially have a major influence on the repercussions of a brain injury on socio-cognitive functioning.