32 results for Instant messaging
Abstract:
One of the major differences undergraduates experience during the transition to university is the style of teaching. In schools and colleges most students study key stage 5 subjects in relatively small informal groups where teacher–pupil interaction is encouraged and two-way feedback occurs through question-and-answer style delivery. On starting in HE, students are amazed by the size of the classes. Even for a relatively small chemistry department with an intake of 60-70 students, biologists, pharmacists, and other first-year undergraduates requiring chemistry can boost numbers in the lecture hall to around 200 or higher. In many universities class sizes of 400 are not unusual for first-year groups where efficiency is crucial. Clearly the personalised classroom-style delivery is not practical, and it is a brave student who shows his ignorance by venturing to ask a question in front of such an audience. In these environments learning can be a very passive process: the lecture acts as a vehicle for the conveyance of information, and our students are expected to reinforce their understanding by ‘self-study’, a term whose meaning many struggle to grasp. The use of electronic voting systems (EVS) in such situations can vastly change the students’ learning experience from a passive to a highly interactive process. This principle has already been demonstrated in Physics, most notably in the work of Bates and colleagues at Edinburgh.1 These small hand-held devices, similar to those which have become familiar through programmes such as ‘Who Wants to be a Millionaire’, can be used to provide instant feedback to students and teachers alike. Advances in technology now allow them to be used in a range of more sophisticated settings, and comprehensive guides on use have been developed for even the most techno-phobic staff.
Abstract:
In the last few years a state-space formulation has been introduced into self-tuning control. This has not only allowed for a wider choice of possible control actions, but has also provided an insight into the theory underlying—and hidden by—that used in the polynomial description. This paper considers many of the self-tuning algorithms, both state-space and polynomial, presently in use, and by starting from first principles develops the observers which are, effectively, used in each case. At any specific time instant the state estimator can be regarded as taking one of two forms. In the first case the most recently available output measurement is excluded, and here an optimal and conditionally stable observer is obtained. In the second case the present output signal is included, and here it is shown that although the observer is once again conditionally stable, it is no longer optimal. This result is of significance, as many of the popular self-tuning controllers lie in the second, rather than first, category.
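The distinction the abstract draws between the two observer forms can be illustrated with a minimal scalar sketch. This is a generic predictor/filter pair under assumed scalar dynamics, not the paper's own derivation; the function name, the model parameters a, b, c and the gain L are all illustrative:

```python
def observer_step(x_prev, u, y, a, b, c, L):
    """One step of a scalar state observer in the two forms distinguished
    in the abstract (generic illustration, not the paper's algorithm).
    Predictor form: excludes the most recent output measurement y.
    Current form: includes the present output signal via the gain L."""
    # a priori estimate: propagate the model, latest measurement excluded
    x_pred = a * x_prev + b * u
    # a posteriori estimate: correct the prediction with the present output
    x_filt = x_pred + L * (y - c * x_pred)
    return x_pred, x_filt
```

Self-tuning controllers that use the second (filtered) form correspond to the abstract's second category: still conditionally stable, but no longer optimal.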
Abstract:
A discrete-time algorithm is presented which is based on a predictive control scheme in the form of dynamic matrix control. A set of control inputs is calculated and made available at each time instant, the actual input applied being a weighted summation of the inputs within the set. The algorithm is directly applicable in a self-tuning format and is therefore suitable for slowly time-varying systems in a noisy environment.
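As a sketch of the kind of computation involved, the following shows generic dynamic matrix control (least squares over the step-response matrix) followed by the abstract's weighted-summation step. This is an illustrative textbook formulation, not the paper's exact algorithm; the move-suppression weight `lam` and all names are assumptions:

```python
import numpy as np

def dmc_moves(step_resp, free_resp, setpoint, n_moves, lam=0.01):
    """Compute a set of future control moves by generic dynamic matrix
    control: least squares over the dynamic (step-response) matrix with
    move-suppression weight lam. Illustrative, not the paper's scheme."""
    P = len(step_resp)
    A = np.zeros((P, n_moves))
    for i in range(P):
        for j in range(n_moves):
            if i >= j:
                A[i, j] = step_resp[i - j]   # shifted step-response columns
    e = np.asarray(setpoint, float) - np.asarray(free_resp, float)
    return np.linalg.solve(A.T @ A + lam * np.eye(n_moves), A.T @ e)

def applied_input(moves, weights):
    """The abstract's idea: the input actually applied is a weighted
    summation of the inputs within the computed set."""
    w = np.asarray(weights, float)
    return float(w @ moves) / float(w.sum())
```

With nonnegative weights the applied input is a convex combination of the computed set, which is what makes the scheme robust to noise in any single computed move.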
Abstract:
The problem of a manipulator operating in a noisy workspace and required to move from an initial fixed position P0 to a final position Pf is considered. However, Pf is corrupted by noise, giving rise to Pˆf, which may be obtained by sensors. The use of learning automata is proposed to tackle this problem. An automaton is placed at each joint of the manipulator, which moves according to the action chosen by the automaton (forward, backward, stationary) at each instant. The simultaneous reward or penalty of the automata makes it possible to avoid the inverse-kinematics computations that would be necessary if the distance of each joint from the final position had to be calculated. Three variable-structure learning algorithms are used: the discretized linear reward-penalty (DLR-P), the linear reward-penalty (LR-P) and a nonlinear scheme. Each algorithm is separately tested with two (forward, backward) and three (forward, backward, stationary) actions.
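The linear reward-penalty (LR-P) scheme mentioned above has a standard textbook update rule; a minimal sketch, with illustrative learning rates a and b rather than the paper's parameter choices:

```python
def lrp_update(p, chosen, reward, a=0.1, b=0.1):
    """One step of the standard linear reward-penalty (LR-P) scheme for
    an automaton with action-probability vector p (entries sum to 1).
    Illustrative textbook update, not the paper's exact algorithm."""
    r = len(p)
    q = p[:]
    if reward:
        # reinforce the chosen action, shrink the others proportionally
        for j in range(r):
            q[j] = p[j] + a * (1 - p[j]) if j == chosen else (1 - a) * p[j]
    else:
        # penalise the chosen action, redistribute mass to the others
        for j in range(r):
            q[j] = (1 - b) * p[j] if j == chosen else b / (r - 1) + (1 - b) * p[j]
    return q
```

Setting b = 0 gives the reward-inaction variant; the discretized DLR-P scheme instead moves probabilities in fixed-size steps.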
Abstract:
An error polynomial is defined, the coefficients of which indicate the difference at any instant between a system and a lower-order model approximating the system. It is shown how Markov parameters and time moments of the model can be matched with those of the system by setting error-polynomial coefficients to zero. Also discussed is the way in which the error between system and model can be considered as a filtered form of an error input function specified by means of model parameter selection.
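For concreteness, the Markov parameters being matched are the impulse-response terms cb, cAb, cA²b, … of a state-space model; a small illustrative helper (not from the paper) computes them:

```python
import numpy as np

def markov_params(A, b, c, n):
    """First n Markov parameters cb, cAb, cA^2b, ... of the state-space
    model (A, b, c). Matching the leading parameters between system and
    reduced model is what zeroing the leading error-polynomial
    coefficients achieves. Illustrative helper, not the paper's code."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    c = np.asarray(c, float)
    out, v = [], b
    for _ in range(n):
        out.append(float(c @ v))  # current parameter c A^k b
        v = A @ v                 # advance to the next power of A
    return out
```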
Abstract:
Starch-based thickening agents may be prescribed for patients with dysphagia. Thickened fluids alter variables of the swallow reflex, allowing more time for bolus manipulation without compromising airway closure. This investigation explored the variation in viscosity and physical characteristics of thickened drinks prepared in different media under laboratory conditions and compared the results with those of thickened drinks presented to dysphagic patients in one hospital. The rheological characteristics were tested on a simple plastometer and a Bohlin CVOR rheometer (Malvern Instruments, Worcestershire, UK). Samples prepared to “syrup” consistency in the laboratory and in the hospital were significantly different from each other (P < 0.0001). This was also the case for samples prepared to “custard” consistency. Differences existed not only in viscosity; drinks prepared in different media also produced different rheological matrices. This signifies different viscoelastic behaviors that may affect manipulation in the mouth. From this study, preparation of thickened drinks using starch-based instant thickening powders appears to be a highly variable practice.
Abstract:
Firms are faced with a wider set of choices when they identify a need for new office space. They can build or purchase accommodation, lease space for long or short periods with or without the inclusion of services, or they can use “instant office” solutions provided by serviced office operators. But how do they evaluate these alternatives, and are they able to make rational choices? The research found that the shortening of business horizons led to the desire for more office space on short-term contracts, often with the inclusion of at least some facilities management and business support services. The need for greater flexibility, particularly in financial terms, was highlighted as an important criterion when selecting new office accommodation. The current office portfolios held were perceived not to meet these requirements. However, there was often a lack of good-quality data available within occupiers which could be used to help them analyse the range of choices in the market. Additionally, there were other organisational constraints to making decisions about inclusive real estate products. These included fragmentation of decision-making, internal politics and the lack of assessment of business risk alongside real estate risk. Overall, therefore, corporate occupiers themselves act as an inertial force against the development of new and innovative real estate products.
Abstract:
In situ high resolution aircraft measurements of cloud microphysical properties were made in coordination with ground based remote sensing observations of a line of small cumulus clouds, using Radar and Lidar, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict too few particles will be active as ice nuclei to account for ice particle concentrations at the observed, near cloud top, temperatures (−7.5 °C). The role of mineral dust particles, consistent with concentrations observed near the surface, acting as high temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production providing the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop process (HM) is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the process of collision and coalescence of rain drops, which once formed fell rapidly through the cloud, collecting ice particles which caused them to freeze and form instant large riming particles. 
The broadening of the droplet size-distribution by collision-coalescence was, therefore, a vital step in this process, as this was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these model simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold and this, combined with the assumption of continual replenishing of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
Abstract:
Chlorogenic acids (CGA) are a class of polyphenols noted for their health benefits. These compounds were identified and quantified, using LC–MS and HPLC, in commercially available coffees which varied in processing conditions. Analysis of ground and instant coffees indicated the presence of caffeoylquinic acids (CQA), feruloylquinic acids (FQA) and dicaffeoylquinic acids (diCQA) in all 18 samples tested. 5-CQA was present at the highest levels, between 25 and 30% of total CGA; subsequent relative quantities were: 4-CQA > 3-CQA > 5-FQA > 4-FQA > diCQA (sum of 3,4-, 3,5- and 4,5-diCQA). CGA content varied greatly (27.33–121.25 mg/200 ml coffee brew), driven primarily by the degree of coffee bean roasting (a high amount of roasting had a detrimental effect on CGA content). These results highlight the broad range of CGA quantity in commercial coffee and demonstrate that coffee choice is important in delivering optimum CGA intake to consumers.
Abstract:
The open magnetosphere model of cusp ion injection, acceleration and precipitation is used to predict the dispersion characteristics for fully pulsed magnetic reconnection at a low-latitude magnetopause X-line. The resulting steps, as would be seen by a satellite moving meridionally and normal to the ionospheric projection of the X-line, are compared with those seen by satellites moving longitudinally, along the open/closed boundary. It is shown that two observed cases can be explained by similar magnetosheath and reconnection characteristics, and that the major differences between them are well explained by the different satellite paths through the events. Both cases were observed in association with poleward-moving transient events seen by ground-based radar, as also predicted by the theory. The results show that the reconnection is pulsed but strongly imply it cannot also be spatially patchy, in the sense of isolated X-lines which independently are intermittently active. Furthermore they show that the reconnection pulses responsible for the poleward-moving events and the cusp ion steps, must cover at least 3 h of magnetic local time, although propagation of the active reconnection region may mean that it does not extend this far at any one instant of time.
Abstract:
The article looks at the role of consumers' social identities in their purchasing decisions, and hence in the creation of effective marketing strategies. It says that people generally belong to multiple social groups, any one of which may have the most salience for them in a given situation. It reports on social psychology research on how a person's connection with a particular social identity can be triggered and discusses the idea in the context of marketing products including the Toyota Prius hybrid-electric automobile, Nescafé instant coffee, and the Jeep all-terrain vehicle. INSET: Lessons of the Stanford Prison Experiment.
Abstract:
Background 29 autoimmune diseases, including Rheumatoid Arthritis, gout, Crohn’s Disease, and Systemic Lupus Erythematosus, affect 7.6-9.4% of the population. While effective therapy is available, many patients do not follow treatment or use medications as directed. Digital health and Web 2.0 interventions have demonstrated much promise in increasing medication and treatment adherence, but to date many Internet tools have proven disappointing. In fact, most digital interventions continue to suffer from high attrition in patient populations, are burdensome for healthcare professionals, and have relatively short life spans. Objective Digital health tools have traditionally centered on the transformation of existing interventions (such as diaries, trackers, stage-based or cognitive behavioral therapy programs, coupons, or symptom checklists) to electronic format. Advanced digital interventions have also incorporated attributes of Web 2.0 such as social networking, text messaging, and the use of video. Despite these efforts, there has been little measurable impact on non-adherence for illnesses that require medical interventions, and research must look to other strategies or development methodologies. As a first step in investigating the feasibility of developing such a tool, the objective of the current study is to systematically rate factors of non-adherence that have been reported in past research studies. Methods Grounded Theory, recognized as a rigorous method that facilitates the emergence of new themes through systematic analysis, data collection and coding, was used to analyze quantitative, qualitative and mixed-method studies addressing the following autoimmune diseases: Rheumatoid Arthritis, gout, Crohn’s Disease, Systemic Lupus Erythematosus, and inflammatory bowel disease. Studies were only included if they contained primary data addressing the relationship with non-adherence.
Results Out of the 27 studies, four non-modifiable and 11 modifiable risk factors were discovered. Over one third of articles identified the following risk factors as common contributors to medication non-adherence (percent of studies reporting): patients not understanding treatment (44%), side effects (41%), age (37%), dose regimen (33%), and perceived medication ineffectiveness (33%). An unanticipated finding that emerged was the need for risk stratification tools (81%) with patient-centric approaches (67%). Conclusions This study systematically identifies and categorizes medication non-adherence risk factors in select autoimmune diseases. Findings indicate that patients’ understanding of their disease and the role of medication are paramount. An unexpected finding was that the majority of research articles called for the creation of tailored, patient-centric interventions that dispel personal misconceptions about disease, pharmacotherapy, and how the body responds to treatment. To our knowledge, these interventions do not yet exist in digital format. Rather than adopting a systems-level approach, digital health programs should focus on cohorts with heterogeneous needs, and develop tailored interventions based on individual non-adherence patterns.
Abstract:
An evidence-led scientific case for development of a space-based polar remote sensing platform at geostationary-like (GEO-like) altitudes is developed through methods including a data user survey. Whilst a GEO platform provides a near-static perspective, multiple platforms are required to provide circumferential coverage. Systems for achieving GEO-like polar observation likewise require multiple platforms; however, the perspective is non-stationary. A key choice is between designs that provide a complete polar view from a single platform at any given instant, and designs where this is obtained by compositing partial views from multiple sensors. Users foresee an increased challenge in extracting geophysical information from composite images and consider the use of non-composited images advantageous. Users also find the placement of apogee over the pole to be preferable to the alternative scenarios. Thus, a clear majority of data users find the “Taranis” orbit concept to be better than a critical inclination orbit, due to the improved perspective offered. The geophysical products that would benefit from a GEO-like polar platform are mainly estimated from radiances in the visible/near infrared and thermal parts of the electromagnetic spectrum, which is consistent with currently proven technologies from GEO. Based on the survey results, needs analysis, and current technology proven from GEO, scientific and observation requirements are developed along with two instrument concepts with eight and four channels, based on Flexible Combined Imager heritage. It is found that an operational system could most likely be deployed from an Ariane 5 ES to a 16-hour orbit, while a proof-of-concept system could be deployed from a Soyuz launch to the same orbit.
Abstract:
Since the first reported case of HIV infection in Hong Kong in 1985, only two HIV-positive individuals in the territory have voluntarily made public their seropositivity: a British dentist named Mike Sinclair, who disclosed his condition to the media in 1992 and died in 1995, and J.J. Chan, a local Chinese disc-jockey, who came forward in 1995 and died just a few months later. When they made their revelations, both became instant media personalities and were invited by the Hong Kong Government to act as spokespeople for AIDS awareness and prevention. Mike Sinclair worked as an education officer for the Hong Kong AIDS Foundation, and J.J. Chan appeared in Government television commercials about AIDS. This article explores how the public identities of these two figures were constructed in the cultural context of Hong Kong where both Eastern and Western values exist side by side and interact. It argues that the construction of `AIDS celebrities' is a kind of `identity project' negotiated among the players involved: the media, the Government, the public, and the person with AIDS (PWA) himself, each bringing to the construction their own `theories' regarding the self and communication. When the players in the construction hold shared assumptions about the nature of the self and the role of communication in enacting it, harmonious discourses arise, but when cultural models among the players differ, contradictory or ambiguous constructions result. The effect of culture on the way `AIDS celebrities' are constructed has implications for the way societies view the issue of AIDS and treat those who have it. It also helps reveal possible sites of difficulty when individuals of different cultures communicate about the issue.