958 results for Unobserved-component model
Abstract:
To assess the impact of tillage practices on soil carbon losses, it is necessary to describe the temporal variability of soil CO2 emission after tillage. It has been argued that large amounts of CO2 emitted after tillage may serve as an indicator of longer-term changes in soil carbon stocks. Here we present a two-step function model based on soil temperature and soil moisture, including an exponential decay in time component, that is efficient in fitting intermediate-term emission after disk plow followed by a leveling harrow (conventional tillage) and after chisel plow coupled with a roller for clod breaking (reduced tillage). Emission after reduced tillage was described using a non-linear estimator with a determination coefficient (R²) as high as 0.98. Results indicate that when emission after tillage is addressed, it is important to consider an exponential decay in time in order to predict the impact of tillage on short-term emissions.
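The kind of model described above can be sketched as follows. This is a hypothetical illustration only: the abstract does not give the paper's functional form or fitted coefficients, so the baseline term, the decay term, and every parameter value below are assumptions.

```python
import math

def soil_co2_emission(t_days, soil_temp_c, soil_moisture,
                      a=1.2, b=0.05, c=0.8, k=0.3):
    """Hypothetical two-part emission model (illustrative parameters):
    a baseline flux driven by soil temperature and moisture, plus a
    tillage-induced pulse that decays exponentially in time."""
    baseline = c * soil_moisture * math.exp(b * soil_temp_c)  # T/M-driven flux
    tillage_pulse = a * math.exp(-k * t_days)                 # fades after tillage
    return baseline + tillage_pulse
```

Fitting k and the other coefficients against measured post-tillage fluxes with a non-linear estimator is the step that yields determination coefficients like the R² reported above.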
Abstract:
Abstract—This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as a structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, stakeholders, intrinsic process, and life cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as helping to improve communication between the different capability communities.
Abstract:
Abstract - This paper reviews existing military capability models and the capability life cycle. It proposes a holistic capability life-cycle model (HCLCM) that combines capability systems with related capability models. The ISO 15288 standard is used as a framework to construct the HCLCM. The HCLCM also shows how capability models and systems relate to each other throughout the capability life cycle. The main contribution of this paper is conceptual in nature. The model complements the existing, but still evolving, understanding of the military capability life cycle in a holistic and systemic way. The model also increases understanding and facilitates communication among various military capability stakeholders.
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, these accounts assume directionality, where the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus; the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items.
Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies and this variability is quantified by using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables in functional linguistics used to probe this. In addition, a new variable called Constructional Entropy is introduced in this study building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that the reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, are proposed for the reflexive verbs in this study. In addition to the variables associated with the lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. 
Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes generalizations over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
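Two of the quantitative measures used in this study, the Levenshtein distance (quantifying connectivity within a verb's neighborhood) and Constructional Entropy (Shannon entropy over a verb's distribution across argument constructions), can be sketched as follows; the construction labels and counts in the usage example are hypothetical, not drawn from the study's data.

```python
from math import log2

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def constructional_entropy(construction_counts):
    """Shannon entropy (in bits) of a verb's distribution over argument
    constructions -- higher values mean the verb's usage is spread more
    evenly across constructions."""
    total = sum(construction_counts.values())
    return -sum((n / total) * log2(n / total)
                for n in construction_counts.values() if n)
```

For example, a (made-up) verb occurring 8 times in one construction and 2 times in another carries about 0.72 bits of constructional entropy, while a verb confined to a single construction carries 0 bits.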
Abstract:
Cancer anemia is classified as an anemia of chronic diseases, although it is sometimes the first symptom of cancer. Cancer anemia includes a hemolytic component, important in the terminal stage when even transfused cells are rapidly destroyed. The presence of a chronic component and the terminal complications of the illness limit studies of the hemolytic component. A multifocal model of tumor growth was used here to simulate the terminal metastatic dissemination stage (several simultaneous inoculations of Walker 256 cells). The hemolytic component of anemia began 3-4 days after inoculation in 100% of the rats and progressed rapidly thereafter: Hb levels dropped from 14.9 ± 0.02 to 8.7 ± 0.06 from days 7 to 11 (~5 times the physiologically normal rate in rats) in the absence of bleeding. The development of anemia was correlated (r² = 0.86) with the development of other systemic effects such as anorexia. There was a significant decrease in the osmotic fragility of circulating erythrocytes: the NaCl concentration causing 50% lysis was reduced from 4.52 ± 0.06 to 4.10 ± 0.01 (P < 0.01) on day 7, indicating a reduction in erythrocyte volume. However, with mild metabolic stress (4-h incubation at 37°C), the erythrocytes showed a greater increase in osmotic fragility than the controls, suggesting marked alteration of erythrocyte homeostasis. These effects may be due to primary plasma membrane alterations (transport and/or permeability) and/or may be secondary to metabolic changes. This multifocal model is adequate for studying the hemolytic component of cancer anemia since it is rapid, highly reproducible and causes minimal animal suffering.
Abstract:
Didanosine (ddI) is a component of highly active antiretroviral therapy drug combinations, used especially in resource-limited settings and in zidovudine-resistant patients. The population pharmacokinetics of ddI was evaluated in 48 healthy volunteers enrolled in two bioequivalence studies. These data, along with a set of covariates, were the subject of a nonlinear mixed-effect modeling analysis using the NONMEM program. A two-compartment model with first-order absorption (ADVAN3 TRANS3) was fitted to the serum ddI concentration data. Final pharmacokinetic parameters, expressed as functions of the covariates gender and creatinine clearance (CLCR), were: oral clearance (CL = 55.1 + 240 × CLCR + 16.6 L/h for males and CL = 55.1 + 240 × CLCR for females), central volume (V2 = 9.8 L), intercompartmental clearance (Q = 40.9 L/h), peripheral volume (V3 = 62.7 + 22.9 L for males and V3 = 62.7 L for females), absorption rate constant (Ka = 1.51 h⁻¹), and dissolution time of the tablet (D = 0.43 h). The intraindividual (residual) variability, expressed as a coefficient of variation, was 13.0%, whereas the interindividual variability of CL, Q, V3, Ka, and D was 20.1, 75.8, 20.6, 18.9, and 38.2%, respectively. The relatively high (>30%) interindividual variability for some of these parameters, observed under the controlled experimental settings of bioequivalence trials in healthy volunteers, may result from genetic variability in the processes involved in ddI absorption and disposition.
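The final covariate equations quoted above can be evaluated directly; this sketch merely encodes those expressions (CLCR must be supplied in the units of the source model, which the abstract does not state):

```python
def ddi_oral_clearance(cl_cr, male):
    """Oral clearance from the reported covariate model:
    CL = 55.1 + 240 * CLCR, plus 16.6 L/h for males."""
    return 55.1 + 240.0 * cl_cr + (16.6 if male else 0.0)

def ddi_peripheral_volume(male):
    """Peripheral volume: V3 = 62.7 L, plus 22.9 L for males."""
    return 62.7 + (22.9 if male else 0.0)
```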
Abstract:
Simultaneous EEG-functional magnetic resonance imaging (fMRI) measurements combine the high temporal resolution of EEG with the distinctive spatial resolution of fMRI. The purpose of this EEG-fMRI study was to search for hemodynamic responses (blood oxygen level-dependent, or BOLD, responses) associated with interictal activity in a case of right mesial temporal lobe epilepsy, before and after a successful selective amygdalohippocampectomy. The study thus located the epileptogenic source with this noninvasive imaging technique and compared the results after removal of the atrophied hippocampus. Additionally, the present study investigated the effectiveness of two different ways of localizing epileptiform spike sources, i.e., BOLD contrast and an independent component analysis dipole model, by comparing their respective outcomes to the resected epileptogenic region. Our findings suggest that the right hippocampus induced the large interictal activity observed in the left hemisphere. Although almost a quarter of the dipoles were found near the right hippocampus region, dipole modeling resulted in a widespread distribution, making EEG analysis alone too weak to precisely determine the source localization, even with a sophisticated method of analysis such as independent component analysis. On the other hand, the combined EEG-fMRI technique made it possible to highlight the epileptogenic foci quite efficiently.
Abstract:
(+)-Dehydrofukinone (DHF) is a major component of the essential oil of Nectandra grandiflora (Lauraceae), and exerts a depressant effect on the central nervous system of fish. However, the neuronal mechanism underlying DHF action remains unknown. This study aimed to investigate the action of DHF on GABAA receptors using a silver catfish (Rhamdia quelen) model. Additionally, we investigated the effect of DHF exposure on stress-induced cortisol modulation. Chemical identification was performed using gas chromatography-mass spectrometry and purity was evaluated using gas chromatography with a flame ionization detector. To an aquarium, we applied between 2.5 and 50 mg/L DHF diluted in ethanol, in combination with 42.7 mg/L diazepam. DHF within the range of 10-20 mg/L acted collaboratively in combination with diazepam, but the sedative action of DHF was reversed by 3 mg/L flumazenil. Additionally, fish exposed for 24 h to 2.5-20 mg/L DHF showed no side effects and there was sustained sedation during the first 12 h of drug exposure with 10-20 mg/L DHF. DHF pretreatment did not increase plasma cortisol levels in fish subjected to a stress protocol. Moreover, the stress-induced cortisol peak was absent following pretreatment with 20 mg/L DHF. DHF proved to be a relatively safe sedative or anesthetic, which interacts with GABAergic and cortisol pathways in fish.
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than its output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or nonexistent tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
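As a minimal, generic illustration of model-based test generation (not the thesis's actual UML-based toolchain), one can enumerate the input sequences that a small behavioural model accepts, each paired with its expected final state; the states and inputs below are invented for the example.

```python
from itertools import product

# A toy behavioural model: (state, input) -> next state transitions.
MODEL = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def generate_tests(start="idle", depth=2):
    """Enumerate every input sequence of length `depth` the model accepts,
    paired with the expected final state -- a minimal model-based test suite."""
    inputs = sorted({i for (_, i) in MODEL})
    tests = []
    for seq in product(inputs, repeat=depth):
        state, ok = start, True
        for step in seq:
            nxt = MODEL.get((state, step))
            if nxt is None:
                ok = False          # model rejects this sequence
                break
            state = nxt
        if ok:
            tests.append((seq, state))
    return tests
```

Each (sequence, expected_state) pair is a test case: the sequence is executed against the implementation, and the observed state is checked against the model's prediction.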
Abstract:
Corporations practice company acquisitions in order to create shareholder value. During the last few decades, companies in emerging markets have become active in the acquisition business. During the last decade, large and significant acquisitions have occurred especially in the automotive industry. As domestic markets have become too competitive and these companies lack required capabilities, they seek opportunities to expand into Western markets by attaining valuable assets through acquisitions of developed-country corporations. This study discusses the issues and characteristics of these acquisitions through case studies. The purpose of this study was to identify the acquisition motives and the strategies for post-transaction brand and product integration, as well as to analyze the effect of the motives on the integration strategy. The cases chosen for the research were Chinese Geely acquiring Swedish Volvo in 2010 and Indian Tata Motors buying British Jaguar Land Rover in 2008. The main topics were chosen for their significance for companies in the automotive industry and because they are the most visible parts to consumers. The study is based on qualitative case study methods, analyzing secondary data from academic papers and news articles as well as companies' own announcements, e.g. stock exchange and press releases. The study finds that the companies in the cases mainly possessed asset-seeking and market-seeking motives. In addition, the findings point to rather minimal post-acquisition brand and product integration strategies. Mainly, the parent companies left the target company autonomous to make its own business strategies and decisions. The most noticeable integrations were in the product development and production processes. Through restructuring the product architectures, the companies were able to share components and technology between product families and brands, which resulted in lower costs and increased profitability and efficiency.
In the Geely-Volvo case, the strategy focused more on component sharing and product development know-how, whereas in the Tata Motors-Jaguar Land Rover case, the main actions were to cut down costs through component sharing and to combine production and distribution networks, especially in Asian markets. However, it was evident that in both cases the integration and technology sharing were executed cautiously to avoid harming the valuable image of the luxury brands. This study concludes that asset-seeking motives have a significant influence on the post-transaction brand and model line-up integration strategies. By taking a cautious approach in acquiring assets such as a luxury brand, the companies in the cases have implemented a successful post-acquisition strategy and managed to create value for the shareholders, at least in the short term.
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has changed from taking place between companies to competition between networks. Companies have focused on their core functions and outsourced support services, like maintenance, above all to decrease costs. This new phenomenon has led to the increasing formation of business networks. As a result, a growing need has arisen for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan the future maintenance operations for the strategic period of the company or for the whole life-cycle period of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common definition for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network (consisting of customer companies that buy maintenance services and various supplier companies). Each network member is able to input its own cost and profit data related to the maintenance services of one item. As a result, the model calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and they have also been constructed for the needs of a single company, without the network perspective.
The developed LCM is a proper tool for decision making about maintenance services in the network environment; it enables analysing the past and making scenarios for the future, and offers choices between alternative maintenance operations. The LCM is also suitable for small companies in building active networks to offer outsourcing services for large companies. The research also introduces a five-step construction process for designing a life-cycle costing model in the network environment. This five-step design process defines the model components and structure through iteration and the exploitation of user feedback. The same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspective of different maintenance network members and presents established value element lists for the customer and the service provider. These value element lists make value visible in the maintenance operations of a networked business. The LCM, combined with value thinking, promotes the notion of maintenance moving from a “cost maker” towards a “value creator”.
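The core calculation in such a life-cycle model is a net-present-value sum over each network member's yearly cost and profit stream; the member names, cash flows, and discount rate below are hypothetical placeholders, not figures from the thesis.

```python
def net_present_value(cash_flows, discount_rate):
    """NPV of yearly net cash flows (profit minus cost); year 0 is undiscounted."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical network: each member supplies its own cost/profit stream
# for the maintenance services of one item over a three-year horizon.
network = {
    "customer":         [-100.0, 40.0, 45.0, 50.0],
    "service_provider": [-20.0, 15.0, 15.0, 15.0],
}
npv_by_member = {m: net_present_value(cf, 0.08) for m, cf in network.items()}
```

Presenting `npv_by_member` side by side is what lets each network member see the maintenance service's value from its own point of view.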
Abstract:
The present thesis study is a systematic investigation of information processing at sleep onset, using auditory event-related potentials (ERPs) as a test of the neurocognitive model of insomnia. Insomnia is an extremely prevalent disorder in society, resulting in problems with daytime functioning (e.g., memory, concentration, job performance, mood, and job and driving safety). Various models have been put forth in an effort to better understand the etiology and pathophysiology of this disorder. One of the newer models, the neurocognitive model of insomnia, suggests that chronic insomnia occurs through conditioned central nervous system arousal. This arousal is reflected in increased information processing, which may interfere with sleep initiation or maintenance. The present thesis employed event-related potentials as a direct method to test information processing during the sleep-onset period. Thirteen poor sleepers with sleep-onset insomnia and 12 good sleepers participated in the present study. All poor sleepers met the diagnostic criteria for psychophysiological insomnia and had a complaint of problems with sleep initiation. All good sleepers reported no trouble sleeping and no excessive daytime sleepiness. Good and poor sleepers spent two nights at the Brock University Sleep Research Laboratory. The first night was used to screen for sleep disorders; the second night was used to investigate information processing during the sleep-onset period. Both groups underwent a repeated sleep-onsets task during which an auditory oddball paradigm was delivered. Participants signalled detection of a higher-pitch target tone with a button press as they fell asleep. In addition, waking alert ERPs were recorded 1 hour before and after sleep on both Nights 1 and 2. As predicted by the neurocognitive model of insomnia, increased CNS activity was found in the poor sleepers; this was reflected by their smaller-amplitude P2 component seen during wake of the sleep-onset period.
Unlike the P2 component, the N1, N350, and P300 did not vary between the groups. The smaller P2 seen in our poor sleepers indicates that they have a deficit in the sleep initiation processes. Specifically, poor sleepers do not disengage their attention from the outside environment to the same extent as good sleepers during the sleep-onset period. The lack of findings for the N350 suggests that this sleep component may be intact in those with insomnia and that it is the waking components (i.e., N1, P2) that may be leading to the deficit in sleep initiation. Further, it may be that the mechanism responsible for the disruption of sleep initiation in the poor sleepers is best reflected by the P2 component. Future research investigating ERPs in insomnia should focus on the identification of the components most sensitive to sleep disruption. As well, methods should be developed in order to more clearly identify the various types of insomnia populations in research contexts (e.g., psychophysiological vs. sleep-state misperception) and the various individual (personality characteristics, motivation) and environmental (arousal-related variables) factors that influence particular ERP components. Insomnia has serious consequences for health, safety, and daytime functioning; thus, research efforts should continue in order to help alleviate this highly prevalent condition.
Abstract:
years 8 months) and 24 older (M = 7 years 4 months) children. A Monitoring Process Model (MPM) was developed and tested in order to ascertain at which component process of the MPM age differences would emerge. The MPM had four components: (1) assessment; (2) evaluation; (3) planning; and (4) behavioural control. The MPM was assessed directly using a referential communication task in which the children were asked to make a series of five Lego buildings (a baseline condition and one building for each MPM component). Children listened to instructions from one experimenter while a second experimenter in the room (a confederate) interjected varying levels of verbal feedback in order to assist the children and control the component of the MPM. This design allowed us to determine at which "stage" of processing children would most likely have difficulty monitoring themselves in this social-cognitive task. Developmental differences were observed for the evaluation, planning and behavioural control components, suggesting that older children were able to be more successful with the more explicit metacomponents. Interestingly, however, there was no age difference in terms of Lego task success in the baseline condition, suggesting that without the intervention of the confederate younger children monitored the task about as well as older children. This pattern of results indicates that the younger children were disrupted by the feedback rather than helped. On the other hand, the older children were able to incorporate the feedback offered by the confederate into a plan of action. Another aim of this study was to assess similar processing components to those investigated by the MPM Lego task in a more naturalistic observation. Together, the use of the Lego Task (a social cognitive task) and the naturalistic social interaction allowed for the appraisal of cross-domain continuities and discontinuities in monitoring behaviours.
In this vein, analyses were undertaken in order to ascertain whether or not successful performance in the MPM Lego Task would predict cross-domain competence in the more naturalistic social interchange. Indeed, success in the two latter components of the MPM (planning and behavioural control) was related to overall competence in the naturalistic task. However, this cross-domain prediction was not evident for all levels of the naturalistic interchange, suggesting that the nature of the feedback a child receives is an important determinant of response competency. Individual difference measures reflecting the children's general cognitive capacity (Working Memory and Digit Span) and verbal ability (vocabulary) were also taken in an effort to account for more variance in the prediction of task success. However, these individual difference measures did not serve to enhance the prediction of task performance in either the Lego Task or the naturalistic task. Similarly, parental responses to questionnaires pertaining to their child's temperament and social experience also failed to increase prediction of task performance. On-line measures of the children's engagement, positive affect and anxiety also failed to predict competence ratings.
Abstract:
This study investigated the effectiveness of an Ontario-developed online Special Education teacher training course as a model for in-service teacher professional development in China. The study employed a mixed method approach encompassing both a quantitative survey and a qualitative research component to gather perceptions of Chinese and Canadian teachers, educational administrators, and teacher-educators who have intensive experience with online education, Special Education, and teacher preparation programs both in China and Canada. The study revealed insufficient understanding of Special Education among the general Chinese population, underdevelopment of Special Education teacher preparation in China, and potential benefits of using a Canadian online teacher training course as a model for Special Education in China. Based on the literature review and the results of this study, it is concluded that online Canadian Special Education teacher in-service courses can set an example for Chinese Special Education teacher training. A caveat is that such courses would require localized modifications, support of educational authorities, and pilot testing.
Abstract:
We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
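The instrument substitution idea can be illustrated with the classical Anderson-Rubin statistic, which tests H0: beta = beta0 by projecting the null-restricted residuals directly onto the instruments. This is a textbook sketch of that one ingredient only, not the paper's full procedure (which also covers generated regressors, sample splitting, and projection techniques); the simulated data in the usage example are arbitrary.

```python
import numpy as np

def anderson_rubin(y, X, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = X beta + u,
    with instrument matrix Z (n x k). Under Gaussian errors and exogenous
    instruments it follows an F(k, n - k) distribution regardless of
    instrument strength."""
    n, k = Z.shape
    u0 = y - X @ beta0                               # residuals under the null
    fitted = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]  # projection onto span(Z)
    ss_fit = fitted @ fitted                         # explained sum of squares
    ss_res = u0 @ u0 - ss_fit                        # residual sum of squares
    return (ss_fit / k) / (ss_res / (n - k))
```

Because the statistic's null distribution does not depend on the strength of the first-stage relationship, confidence sets built by inverting it remain valid under weak instruments, which is the property the abstract emphasizes.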