18 results for mixed-signal design
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Aims: The aim was to examine whether specific skills required for cognitive behavioural therapy (CBT) could be taught using a computerised training paradigm with people who have intellectual disabilities (IDs). Training aimed to improve: a) ability to link pairs of situations and mediating beliefs to emotions, and b) ability to link pairs of situations and emotions to mediating beliefs. Method: Using a single-blind mixed experimental design, sixty-five participants with IDs were randomised to receive either computerised training or an attention-control condition. Cognitive mediation skills were assessed before and after training. Results: Participants who received training were significantly better at selecting appropriate emotions within situation-belief pairs, controlling for baseline scores and IQ. Despite significant improvements in the ability of those who received training to correctly select mediating beliefs for situation-feeling pairings, no between-group differences were observed at post-test. Conclusions: The findings indicated that computerised training led to a significant improvement in some aspects of cognitive mediation for people with IDs, but whether this has a positive effect upon outcome from therapy is yet to be established. (C) 2015 Elsevier Ltd. All rights reserved.
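As a rough, hedged illustration of the kind of analysis the abstract describes (comparing post-training scores between conditions while controlling for baseline score and IQ), the sketch below fits an ANCOVA-style linear model with statsmodels. The column names and values are invented placeholders, not data from the study.

```python
# Hypothetical ANCOVA-style comparison of post-training scores between a
# training and an attention-control condition, adjusting for baseline and IQ.
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data frame; in the study each row would be one participant.
df = pd.DataFrame({
    "group":    ["training", "control", "training", "control", "training", "control"],
    "baseline": [4, 5, 3, 4, 6, 5],
    "iq":       [62, 58, 65, 60, 55, 63],
    "post":     [8, 5, 7, 4, 9, 6],
})

# Post-test score modelled on condition, controlling for baseline score and IQ.
model = smf.ols("post ~ C(group) + baseline + iq", data=df).fit()
print(model.summary())
```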
Abstract:
Aims: Training has been shown to improve the ability of people with intellectual disabilities (IDs) to perform some cognitive behavioural therapy (CBT) tasks. This study used a computerised training paradigm with the aim of improving the ability of people with IDs to: a) discriminate between behaviours, thoughts and feelings, and b) link situations, thoughts and feelings. Methods: Fifty-five people with mild-to-moderate IDs were randomly assigned to a training or attention-control condition in a single-blind mixed experimental design. Computerised tasks assessed the participants’ skills in: (a) discriminating between behaviours, thoughts and feelings (separately and pooled together), and (b) cognitive mediation by selecting appropriate emotions as consequences to given thoughts, and appropriate thoughts as mediators of given emotions. Results: Training significantly improved ability to discriminate between behaviours, thoughts and feelings pooled together, compared to the attention-control condition, even when controlling for baseline scores and IQ. Large within-group improvements in the ability to identify behaviours and feelings were observed for the training condition, but not the attention-control group. There were no significant between-group differences in ability to identify thoughts, or on cognitive mediation skills. Conclusions: A single session of computerised training can improve the ability of people with IDs to understand and practise CBT tasks relating to behaviours and feelings. There is potential for computerised training to be used as a “primer” for CBT with people with IDs to improve engagement and outcomes, but further development on a specific computerised cognitive mediation task is needed.
Abstract:
Information technologies are used across all stages of the construction process, and are crucial in the delivery of large projects. Drawing on detailed research on a construction megaproject, in this paper we take a practice-based approach to examining the practical and theoretical tensions between existing ways of working and the introduction of new coordination tools. We analyze the new hybrid practices that emerge, using insights from actor-network theory to articulate the delegation of actions to material and digital objects within ecologies of practice. The three vignettes that we discuss highlight this delegation of actions, the “plugging” and “patching” of ecologies occurring across media, and the continual iterations of working practices between different types of media. By shifting the focus from tools to these wider ecologies of practice, the approach has important managerial implications for the stabilization of new technologies and practices and for managing technological change on large construction projects. We conclude with a discussion of new directions for research, oriented to further elaborating on the importance of the material in understanding change.
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.
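The skill statistics quoted for GLAM (the correlation between observed and simulated yield, and the root mean square error expressed as a percentage of mean yield) can be computed from any pair of yield series with a few lines of NumPy; the arrays below are placeholders, not the Indian groundnut data.

```python
import numpy as np

# Placeholder observed and simulated yield series (e.g. kg/ha per year).
observed  = np.array([820, 790, 860, 900, 750, 880, 840])
simulated = np.array([800, 810, 850, 870, 770, 860, 830])

# Pearson correlation between observed and simulated yields.
r = np.corrcoef(observed, simulated)[0, 1]

# Root mean square error as a percentage of the mean observed yield.
rmse = np.sqrt(np.mean((observed - simulated) ** 2))
rmse_pct = 100.0 * rmse / observed.mean()

print(f"r = {r:.2f}, RMSE = {rmse_pct:.1f}% of mean yield")
```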
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings presents a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcings rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the largest effect. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
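A minimal sketch of the statistical idea, assuming a toy setting: responses from every on/off combination of a few anthropogenic forcings are fitted with a single linear model that includes pairwise interactions. The factors, coefficients and noise below are invented for illustration and are far simpler than the GCM experiment described.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy full-factorial design: every on/off combination of three hypothetical
# anthropogenic forcings (the real experiment uses more factors and GCM runs).
factors = ["ghg", "sulphate", "ozone"]
design = pd.DataFrame(list(itertools.product([0, 1], repeat=3)), columns=factors)

# Synthetic "temperature response" with main effects plus a ghg:sulphate interaction.
design["dT"] = (0.8 * design["ghg"] - 0.3 * design["sulphate"] + 0.1 * design["ozone"]
                - 0.2 * design["ghg"] * design["sulphate"]
                + rng.normal(0, 0.02, len(design)))

# Linear model with all pairwise interactions, analogous in spirit to the
# statistical model used to separate the forcing effects.
fit = smf.ols("dT ~ (ghg + sulphate + ozone) ** 2", data=design).fit()
print(fit.params)
```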
Abstract:
The aim of this paper is to study the impact of channel state information on the design of cooperative transmission protocols. This is motivated by the fact that the performance gain achieved by cooperative diversity comes at the price of extra bandwidth consumption. Several opportunistic relaying strategies are developed to fully utilize the different types of a priori channel information. Information-theoretic measures such as outage probability and the diversity-multiplexing tradeoff are developed for the proposed protocols. The analytical and numerical results demonstrate that the use of such a priori information increases the spectral efficiency of cooperative diversity, especially at low signal-to-noise ratio.
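A minimal Monte Carlo sketch of one information-theoretic measure mentioned here, outage probability, for a generic opportunistic decode-and-forward relay scheme over i.i.d. Rayleigh fading. This is not the paper's protocol; the rate, relay count and channel model are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_probability(snr_db, rate=1.0, n_relays=3, trials=200_000):
    """Monte Carlo outage probability for a simple opportunistic
    decode-and-forward scheme over i.i.d. Rayleigh fading (illustrative only)."""
    snr = 10 ** (snr_db / 10)
    # Exponentially distributed channel gains |h|^2 for source-relay and relay-destination links.
    g_sr = rng.exponential(1.0, (trials, n_relays))
    g_rd = rng.exponential(1.0, (trials, n_relays))
    # The end-to-end quality of each two-hop path is limited by its weaker hop;
    # opportunistic relaying picks the best relay.
    g_best = np.minimum(g_sr, g_rd).max(axis=1)
    # Half-duplex relaying costs a factor 1/2 in spectral efficiency.
    capacity = 0.5 * np.log2(1 + snr * g_best)
    return np.mean(capacity < rate)

for snr_db in (0, 10, 20):
    print(f"{snr_db:>2} dB: outage = {outage_probability(snr_db):.4f}")
```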
Abstract:
Asynchronous Optical Sampling has the potential to improve the signal-to-noise ratio in THz transient spectrometry. The design of an inexpensive control scheme for synchronising two femtosecond pulse frequency comb generators at an offset frequency of 20 kHz is discussed. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing of recorded THz transients in the time and frequency domains is outlined. Finally, possibilities for femtosecond pulse shaping using genetic algorithms are mentioned.
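Back-of-envelope arithmetic for asynchronous optical sampling at the quoted 20 kHz offset, assuming (hypothetically) combs near a 100 MHz repetition rate: the offset sets the scan rate, the repetition period sets the delay window, and the difference in pulse periods sets the equivalent-time step.

```python
# Back-of-envelope timing for asynchronous optical sampling (ASOPS), assuming
# two combs near a 100 MHz repetition rate offset by the 20 kHz quoted above.
f_rep  = 100e6                        # repetition rate of the first comb (Hz), assumed
df     = 20e3                         # offset frequency between the combs (Hz)
f_rep2 = f_rep + df

scan_rate   = df                      # full delay scans per second
time_window = 1.0 / f_rep             # delay range covered by each scan (s)
step        = df / (f_rep * f_rep2)   # equivalent-time step between successive pulse pairs (s)
stretch     = f_rep / df              # laboratory-time to delay-time magnification

print(f"scan rate      : {scan_rate:.0f} scans/s")
print(f"delay window   : {time_window * 1e9:.1f} ns")
print(f"time step      : {step * 1e12:.2f} ps")
print(f"stretch factor : {stretch:.0f}x")
```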
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on a polyhedral representation of nested loop computations.
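A toy illustration of what uniformization means in this context: an affine (broadcast) dependence, whose distance grows with the loop index, is replaced by a routing variable that propagates the value between neighbouring cells, giving constant (uniform) dependence vectors. This is only a hand-made example of the idea, not the paper's algorithm.

```python
import numpy as np

N = 6
a = np.arange(1, N + 1)          # input vector broadcast along rows
b = np.arange(1, N + 1)[::-1]

# Non-uniform form: every cell (i, j) reads a[i] directly (a broadcast,
# i.e. an affine dependence whose distance grows with j).
y_broadcast = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        y_broadcast[i, j] = a[i] * b[j]

# Uniformized form: a[i] is pipelined from neighbour to neighbour, so each
# cell only reads values at a fixed distance (0, 1) -- a uniform dependence.
x = np.zeros((N, N))             # routing variable carrying a[i] along j
y_uniform = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        x[i, j] = a[i] if j == 0 else x[i, j - 1]
        y_uniform[i, j] = x[i, j] * b[j]

assert np.allclose(y_broadcast, y_uniform)
print("broadcast and uniformized recurrences agree")
```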
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 μm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter, cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and the antireflection coatings used on the lenses and dewar window of the instrument. This process uses a system throughput performance approach, taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
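A simplified sketch of the system-throughput approach described: multiply the spectral curves of the warm and cold filters, the transmissive optics and the detector response, then integrate the product against a predicted atmospheric signal. The Gaussian curves and the 12 μm channel below are invented stand-ins for the measured component data.

```python
import numpy as np

# Wavelength grid around a hypothetical channel centred near 12 um.
wl = np.linspace(11.0, 13.0, 501)                    # micrometres
dwl = wl[1] - wl[0]

def gaussian_band(wl, centre, width, peak=1.0):
    """Toy spectral curve standing in for measured component data."""
    return peak * np.exp(-0.5 * ((wl - centre) / width) ** 2)

warm_filter = gaussian_band(wl, 12.0, 0.15, 0.80)    # 301 K bandpass filter
cold_filter = gaussian_band(wl, 12.0, 0.25, 0.90)    # wider 65 K blocking filter
optics      = np.full_like(wl, 0.85)                 # lenses + dewar window transmission
detector    = gaussian_band(wl, 12.1, 1.00, 1.00)    # relative detector response
atmosphere  = gaussian_band(wl, 12.0, 0.50, 1.00)    # predicted atmospheric radiance (arb.)

# End-to-end spectral throughput and the in-band signal it passes.
throughput = warm_filter * cold_filter * optics * detector
signal = np.sum(throughput * atmosphere) * dwl       # simple rectangle-rule integral
print(f"integrated in-band signal: {signal:.3f} (arbitrary units)")
```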
Abstract:
Sampling strategies for monitoring the status and trends in wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, and so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information in the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys, whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than with a strategy in which all of the sample is retained or all is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
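A minimal sketch of the mixed selection step described above: retain part of the previous sample by simple random sampling and choose the remainder with probability proportional to model-predicted abundance. The unit count, predictions and numpy.random.choice call are illustrative simplifications of a formal unequal-probability design.

```python
import numpy as np

rng = np.random.default_rng(42)

units = np.arange(100)                                       # survey units in the region
previous_sample = rng.choice(units, size=20, replace=False)  # last survey's sample
predicted_abundance = rng.gamma(2.0, 5.0, size=units.size)   # model-based predictions

n_retain, n_new = 10, 10

# Part 1: retain a simple random subsample of the previous survey sample.
retained = rng.choice(previous_sample, size=n_retain, replace=False)

# Part 2: select the remaining units with probability proportional to
# predicted abundance (a rough stand-in for a formal unequal-probability scheme).
candidates = np.setdiff1d(units, retained)
p = predicted_abundance[candidates]
p = p / p.sum()
new_units = rng.choice(candidates, size=n_new, replace=False, p=p)

sample = np.concatenate([retained, new_units])
print(np.sort(sample))
```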
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provide a readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches for conducting an IT application assessment. Quantitative data such as end users' feedback on the IT applications contribute to the technical impact on efficiency and productivity. Qualitative data such as business domain, business services and IT application cost drivers are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to the cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts on a business service is based on logical interpretation. A data mining method is suggested in order to derive patterns of IT application capabilities. Practical implications: This method has been applied in a local council in the UK, helping the council to decide the future status of its IT applications for cost-saving purposes.
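Purely as a hypothetical illustration of a readiness assessment of this general kind (not the paper's method), the sketch below combines quantitative end-user feedback with a qualitative business-value rating and an annual-cost term into a weighted score; all field names, weights and thresholds are invented.

```python
# Hypothetical readiness scoring for SaaS migration: combine quantitative
# end-user feedback with a qualitative business-value rating and annual cost.
# Weights, scales and application names are illustrative assumptions only.
applications = [
    {"name": "HR portal",      "user_score": 3.2, "business_value": 4, "annual_cost": 40_000},
    {"name": "Planning GIS",   "user_score": 4.5, "business_value": 5, "annual_cost": 120_000},
    {"name": "Legacy finance", "user_score": 2.1, "business_value": 3, "annual_cost": 95_000},
]

def readiness(app, w_user=0.4, w_value=0.4, w_cost=0.2, cost_cap=150_000):
    """Weighted readiness score in [0, 1]; higher suggests a better SaaS candidate.
    Cost is weighted positively here because expensive applications offer more
    potential savings from migration (a design choice, not the paper's rule)."""
    return (w_user * app["user_score"] / 5
            + w_value * app["business_value"] / 5
            + w_cost * min(app["annual_cost"] / cost_cap, 1.0))

for app in sorted(applications, key=readiness, reverse=True):
    print(f"{app['name']:<14} readiness = {readiness(app):.2f}")
```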
Abstract:
In vitro studies found that inclusion of dried stinging nettle (Urtica dioica) at 100 mg/g dry matter (DM) increased the pH of a rumen fluid inoculated fermentation buffer by 30%, and the effect persisted for 7 days. Our objective was to evaluate the effects of adding stinging nettle haylage to a total mixed ration on feed intake, eating and rumination activity, rumen pH, milk yield, and milk composition of lactating dairy cows. Six lactating Holstein-Friesian cows were used in a replicated 3 × 3 Latin square design experiment with 3 treatments and 3-week periods. Treatments were a control (C) high-starch (311 g/kg DM) total mixed ration diet and two treatment diets containing 50 (N5) and 100 (N10) g nettle haylage/kg DM as a replacement for ryegrass silage (Lolium perenne). There was an increase (linear, P < 0.010) in the proportion of large particles and a reduction in medium (linear, P = 0.045) and fine particles (linear, P = 0.026) in the diet offered with increasing nettle inclusion. A numerical decrease (linear, P = 0.106) in DM intake (DMI) was observed as nettle inclusion in the diet increased. Milk yield averaged 20.3 kg/day and was not affected by diet. There was a decrease (quadratic, P = 0.01) in the time animals spent ruminating as nettle inclusion in the diet increased, in spite of an increase in the number of boli produced daily for the N5 diet (quadratic, P = 0.031). Animals fed the N10 diet spent less time with a rumen pH below 5.5 (P < 0.05) than cows fed the N5 diet. Averaged over an 8.5 h sampling period, there were no changes in the concentration or proportions of acetate or propionate in the rumen, but feeding nettle haylage reduced the concentrations of n-butyrate (quadratic, P < 0.001), i-butyrate (linear, P < 0.009) and n-caproate (linear, P < 0.003). Milk yield and fat- and protein-corrected milk yield were not affected when nettles replaced ryegrass silage in the diet of lactating dairy cows, despite a numerical reduction in feed intake. Rumination activity was reduced by the addition of nettle haylage to the diet, which may reflect differences in fibre structure between the nettle haylage and the ryegrass silage fed. Changes observed in rumen pH suggest potential benefits of feeding nettle haylage for reducing rumen acidosis. However, the extent to which these effects were due to the fermentability and structure of the nettle haylage compared to the ryegrass silage fed, or to a bioactive component of the nettles, is not certain.
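The linear and quadratic effects reported for the three inclusion levels (0, 50 and 100 g/kg DM) correspond to orthogonal polynomial contrasts applied to the treatment means; a small sketch with made-up means shows the arithmetic.

```python
import numpy as np

# Treatment means at the three equally spaced inclusion levels (C, N5, N10).
# The values are made up purely to illustrate the contrast arithmetic.
means = np.array([21.0, 20.4, 19.5])          # e.g. DM intake, kg/day

linear    = np.array([-1, 0, 1])              # orthogonal linear contrast
quadratic = np.array([ 1, -2, 1])             # orthogonal quadratic contrast

print("linear effect   :", linear @ means)     # negative value -> decline with inclusion
print("quadratic effect:", quadratic @ means)  # departure from a straight-line response
```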
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measure used, would produce a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend mixed-effects model analysis instead.
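A minimal sketch of the recommended alternative, assuming simulated data: a mixed model with a random participant intercept and an item variance component fitted with statsmodels. Binary recall is handled here with a linear-probability simplification rather than the logistic link one would normally use, and all variable names and values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated judgment-of-learning style data with crossed participants and items.
n_subj, n_item = 30, 40
subj = np.repeat(np.arange(n_subj), n_item)
item = np.tile(np.arange(n_item), n_subj)
jol  = rng.uniform(0, 100, subj.size)                        # metacognitive judgment
p    = 0.2 + 0.005 * jol + 0.1 * rng.standard_normal(n_subj)[subj]
recall = rng.binomial(1, np.clip(p, 0.01, 0.99))             # binary memory outcome

df = pd.DataFrame({"subj": subj, "item": item, "jol": jol, "recall": recall})

# Mixed model with random participant intercepts and an item variance
# component; a linear-probability simplification of the logistic mixed
# models typically used for binary recall data.
m = smf.mixedlm("recall ~ jol", df, groups="subj",
                re_formula="1", vc_formula={"item": "0 + C(item)"}).fit()
print(m.summary())
```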