59 results for Pre-Mesozoic basement of Iberia
Abstract:
The comments of Charles Kegan Paul, the Victorian publisher who was involved in publishing the novels of the nineteenth-century British-Indian author Philip Meadows Taylor as single-volume reprints in the 1880s, are illuminating. They are indicative of the publisher's position with regard to publishing: that there was often no correlation between commercial success and the artistic merit of a work. According to Kegan Paul, a substandard or mediocre text would be commercially successful as long as it met a perceived want on the part of the public. In effect, the ruminations of the publisher suggest that a firm desirous of commercial success for a work should be an astute judge of the pre-existing wants of consumers within the market. Yet Theodor Adorno, writing in the mid-twentieth century, offers an entirely different perspective from Kegan Paul's, arguing that there is nothing foreordained about consumer demand for certain cultural tropes or productions. These are in fact driven by an industry that pre-empts and conditions the possible reactions of the consumer. Both Kegan Paul's and Adorno's insights are illuminating when it comes to addressing the key issues explored in this essay. Kegan Paul's comments allude to the ways in which publishers' promotion of Philip Meadows Taylor's fictional depictions of India and its peoples was to a large extent driven, in the mid- to late nineteenth century, by their expectations of what metropolitan readers desired at any given time, whereas Adorno's insights reveal the ways in which British-Indian narratives and the public identity of their authors were not assured in advance but were, to a large extent, engineered by the publishing industry and the literary marketplace.
Abstract:
Karaoke singing is a popular form of entertainment in several parts of the world. Since this genre of performance attracts amateurs, the singing often has artifacts related to scale, tempo, and synchrony. We have developed an approach to correct these artifacts using cross-modal multimedia stream information. We first perform adaptive sampling on the user's rendition and then use the original singer's rendition, as well as the video caption highlighting information, to correct the pitch, tempo and loudness. A method of analogies has been employed to perform this correction. The basic idea is to manipulate the user's rendition to make it as similar as possible to the original singing. A pre-processing step that removes noise due to feedback and huffing also helps improve the quality of the user's audio. The results described in the paper show the effectiveness of this multimedia approach.
Abstract:
Disability-related public policy currently emphasises reducing the number of people experiencing exclusion from the spaces of the social and economic majority as the pre-eminent indicator of inclusion. Twenty-eight adult New Zealand vocational service users collaborated in a participatory action research project to develop shared understandings of community participation. Analysis of their narratives suggests that spatial indices of inclusion are silent, in potentially oppressive ways, about how mainstream settings can be experienced by people with disabilities, and silent too about the alternative, less well-sanctioned communities to which people with disabilities have always belonged. Participants identified five key attributes of place as important qualitative antecedents to a sense of community belonging. The potential of these attributes and other self-authored approaches to inclusion are explored as ways that people with disabilities can support the policy objective of effecting a transformation from disabling to inclusive communities.
Abstract:
Background There has been a significant reduction in the number of people with severe mental illness who spend extended periods in long-stay hospitals. District health authorities, local authorities, housing associations and voluntary organisations are jointly expected to provide support for people with severe mental disorder/s. This 'support' may well involve some kind of special housing. Objectives To determine the effects of supported housing schemes compared with outreach support schemes or 'standard care' for people with severe mental disorder/s living in the community. Search methods For the 2006 update we searched the Cochrane Schizophrenia Group Trials Register (April 2006) and the Cochrane Central Register of Controlled Trials (CENTRAL, 2006 Issue 2). Selection criteria We included all relevant randomised, or quasi-randomised, trials dealing with people with 'severe mental disorder/s' allocated to supported housing, compared with outreach support schemes or standard care. We focused on outcomes of service utilisation, mental state, satisfaction with care, social functioning, quality of life and economic data. Data collection and analysis We reliably selected studies, quality rated them and undertook data extraction. For dichotomous data, we would have estimated relative risks (RR), with the 95% confidence intervals (CI). Where possible, we would have calculated the number needed to treat statistic (NNT). We would have carried out analysis by intention-to-treat and would have summated normal continuous data using the weighted mean difference (WMD). We would have presented scale data for only those tools that had attained pre-specified levels of quality and undertaken tests for heterogeneity and publication bias. Main results Although 139 citations were acquired from the searches, no study met the inclusion criteria. 
Authors' conclusions Dedicated schemes whereby people with severe mental illness are located within one site or building with assistance from professional workers have potential for great benefit as they provide a 'safe haven' for people in need of stability and support. This, however, may be at the risk of increasing dependence on professionals and prolonging exclusion from the community. Whether or not the benefits outweigh the risks can only be a matter of opinion in the absence of reliable evidence. There is an urgent need to investigate the effects of supported housing on people with severe mental illness within a randomised trial.
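The dichotomous-outcome statistics the review planned to use (relative risk with a 95% confidence interval, and the number needed to treat) can be sketched as follows. This is a minimal illustration of the standard formulae; the function names and example counts are my own, not taken from the review.

```python
import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    """Relative risk with a 95% confidence interval (log-normal approximation)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) from the four cells of the 2x2 table
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    return rr, (rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se))

def number_needed_to_treat(events_tx, n_tx, events_ctl, n_ctl):
    """NNT = 1 / absolute risk reduction."""
    return 1 / (events_ctl / n_ctl - events_tx / n_tx)

# Hypothetical counts: 10/100 events under supported housing vs 20/100 under standard care
rr, ci = relative_risk(10, 100, 20, 100)
nnt = number_needed_to_treat(10, 100, 20, 100)
```
With these invented counts the relative risk is 0.5 and the NNT is 10, i.e. ten people would need supported housing to prevent one additional event.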
Abstract:
A combined geomorphological–physical model approach is used to generate three-dimensional reconstructions of glaciers in Pacific Far NE Russia during the global Last Glacial Maximum (gLGM). The horizontal dimensions of these ice masses are delineated by moraines, their surface elevations are estimated using an iterative flowline model, and temporal constraints upon their margins are derived from published age estimates. The equilibrium line altitudes (ELAs) of these ice masses are estimated, and gLGM climate is reconstructed using a simple degree-day melt model. The results indicate that, during the gLGM, ice masses occupying the Pekulney, Kankaren and Sredinny mountains of Pacific Far NE Russia were of valley glacier and ice field type. These glaciers were between 7 and 80 km in length, and were considerably less extensive than during pre-LGM phases of advance. gLGM ice masses in these regions had ELAs of between 575 ± 22 m and 1035 ± 41 m above sea level, corresponding to an ELA depression of 350–740 m relative to present. Data indicate that, in the Pekulney Mountains, this ELA depression occurred because of a 6.48 °C reduction in mean July temperature and a 200 mm a⁻¹ reduction in precipitation, relative to present. These reconstructions thus support a restricted view of gLGM glaciation in Pacific Far NE Russia and indicate that the region's aridity precluded the development of large continental ice sheets.
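A degree-day melt model of the kind mentioned can be sketched in a few lines. This is an illustrative toy, not the paper's calibrated model: the sinusoidal annual temperature cycle, the lapse rate of 6.5 °C km⁻¹, the 30 °C annual range and the degree-day factor of 4.1 mm w.e. °C⁻¹ d⁻¹ are all generic assumptions.

```python
import math

def positive_degree_days(mean_july_temp_sl, elevation_m,
                         lapse_rate=0.0065, annual_range=30.0):
    """Annual positive degree-day (PDD) sum at a given elevation, assuming a
    sinusoidal annual temperature cycle peaking in mid-July (day 196) and a
    fixed atmospheric lapse rate (both generic assumptions)."""
    t_july = mean_july_temp_sl - lapse_rate * elevation_m
    pdd = 0.0
    for day in range(365):
        t = t_july - (annual_range / 2) * (1 - math.cos(2 * math.pi * (day - 196) / 365))
        pdd += max(t, 0.0)  # only temperatures above 0 degC contribute to melt
    return pdd

def annual_melt(pdd, degree_day_factor=4.1):
    """Annual melt (mm water equivalent) = degree-day factor x PDD sum.
    At the ELA this melt balances annual accumulation, which is how a
    palaeo-precipitation estimate follows from a reconstructed ELA."""
    return degree_day_factor * pdd
```
Lowering the mean July temperature or raising the elevation shrinks the PDD sum, so a reconstructed ELA plus an assumed temperature yields the accumulation (precipitation) needed for balance.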
Abstract:
This paper presents single-chip FPGA implementations of the Advanced Encryption Standard (AES) algorithm, Rijndael. In particular, the designs utilise look-up tables to implement the entire Rijndael Round function. A comparison is provided between these designs and similar existing implementations. Hardware implementations of encryption algorithms prove much faster than equivalent software implementations, and since there is a need to perform encryption on data in real time, speed is very important. Field Programmable Gate Arrays (FPGAs) are particularly well suited to encryption implementations due to their flexibility and an architecture that can be exploited to accommodate typical encryption transformations. In this paper, a Look-Up Table (LUT) methodology is introduced in which complex and slow operations are replaced by simple LUTs. A LUT-based, fully pipelined Rijndael implementation is described which has a pre-placement performance of 12 Gbit/s: 1.2 times faster than an alternative design in which look-up tables implement only one of the Round function transformations, and 6 times faster than previous single-chip implementations. Iterative Rijndael implementations based on the LUT design approach are also discussed and prove faster than typical iterative implementations.
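The LUT principle at the heart of such designs can be illustrated in software: the Rijndael S-box (a GF(2⁸) inversion followed by an affine transform) is expensive to compute, but it is computed once into a 256-entry table, after which every SubBytes step is a plain lookup. This is a minimal Python sketch of that idea, not the paper's hardware design.

```python
def build_aes_sbox():
    """Precompute the 256-entry Rijndael S-box so that SubBytes becomes a
    single table lookup -- the same LUT idea exploited in the FPGA designs."""
    def gmul(a, b):
        # Carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return r

    sbox = []
    for x in range(256):
        # Multiplicative inverse in GF(2^8); 0 maps to 0 by convention
        inv = 0 if x == 0 else next(y for y in range(1, 256) if gmul(x, y) == 1)
        s, r = inv, inv
        for _ in range(4):  # affine transform: XOR of four left rotations...
            r = ((r << 1) | (r >> 7)) & 0xFF
            s ^= r
        sbox.append(s ^ 0x63)  # ...plus the constant 0x63
    return sbox

SBOX = build_aes_sbox()

def sub_bytes(state):
    """One AES SubBytes step as pure table lookups."""
    return [SBOX[b] for b in state]
```
The table matches the published S-box (e.g. SBOX[0x00] is 0x63 and SBOX[0x53] is 0xED); the hardware designs in the paper extend the same substitution-by-table idea to the full Round function.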
Abstract:
Molecularly Imprinted Polymers (MIPs) against S-ibuprofen were synthesised using a tailor-made functional monomer, 2-acrylamido-4-methylpyridine, following extensive pre-polymerisation studies of template-monomer complexation. An apparent association constant of 340 ± 22 M⁻¹ was calculated, which was subsequently corrected to account for dimerisation of ibuprofen (K_dim = 320 ± 95 M⁻¹), resulting in an intrinsic association constant of 715 ± 16 M⁻¹, consistent with previously reported values. Using the synthesised imprinted polymer as a stationary phase, complete resolution of a racemic mixture of ibuprofen was achieved in predominantly aqueous mobile phases. An imprinting factor of 10 was observed, and was found to be in agreement with the difference in the average number of binding sites between MIP and blank polymers, calculated by staircase frontal chromatography. The imprinted polymers exhibited enhanced selectivity for the templated drug over structurally related NSAIDs. When applied as sorbents in solid-phase extraction of ibuprofen from commercial tablets, urine and blood serum samples, recoveries of up to 92.2% were achieved. © The Royal Society of Chemistry 2012
Abstract:
An evolution in theoretical models and methodological paradigms for investigating cognitive biases in the addictions is discussed. Anomalies in traditional cognitive perspectives, and problems with the self-report methods which underpin them, are highlighted. An emergent body of cognitive research, contextualized within the principles and paradigms of cognitive neuropsychology rather than social learning theory, is presented which, it is argued, addresses these anomalies and problems. Evidence is presented that biases in the processing of addiction-related stimuli, and in the network of propositions which motivate addictive behaviours, occur at automatic, implicit and pre-conscious levels of awareness. It is suggested that methods which assess such implicit cognitive biases (e.g. Stroop, memory, priming and reaction-time paradigms) yield findings which have better predictive utility for ongoing behaviour than those biases determined by self-report methods of introspection. The potential utility of these findings for understanding "loss of control" phenomena, and the desynchrony between reported beliefs and intentions and ongoing addictive behaviours, is discussed. Applications to the practice of cognitive therapy are considered.
Abstract:
In patients with cystic fibrosis (CF), clinical trials are of paramount importance. Here, the current status of drug development in CF is discussed and future directions are highlighted. Methods for pre-clinical testing of drugs with potential activity in CF patients, including relevant animal models, are described. Study design options for phase II and phase III studies involving CF patients are provided, including required patient numbers, safety issues and surrogate end-point parameters for drugs tested for different disease manifestations. Finally, regulatory issues for licensing new therapies for CF patients are discussed, including new directives of the European Union, and the structure of a European clinical trial network for clinical studies involving CF patients is proposed.
Abstract:
Tissue microarrays (TMAs) represent a powerful method for undertaking large-scale tissue-based biomarker studies. While TMAs offer several advantages, there are a number of issues specific to their use which need to be considered when employing this method. Given the investment in TMA-based research, guidance on design and execution of experiments will be of benefit and should help researchers new to TMA-based studies to avoid known pitfalls. Furthermore, a consensus on quality standards for TMA-based experiments should improve the robustness and reproducibility of studies, thereby increasing the likelihood of identifying clinically useful biomarkers. In order to address these issues, the National Cancer Research Institute Biomarker and Imaging Clinical Studies Group organized a 1-day TMA workshop held in Nottingham in May 2012. The document herein summarizes the conclusions from the workshop. It includes guidance and considerations on all aspects of TMA-based research, including the pre-analytical stages of experimental design, the analytical stages of data acquisition, and the postanalytical stages of data analysis. A checklist is presented which can be used both for planning a TMA experiment and interpreting the results of such an experiment. For studies of cancer biomarkers, this checklist could be used as a supplement to the REMARK guidelines.
Abstract:
Recently, Bell (2004, Mon. Not. R. Astron. Soc., 353, 550) has reanalysed the problem of wave excitation by cosmic rays propagating in the pre-cursor region of a supernova remnant shock front. He pointed out a strong, non-resonant, current-driven instability that had been overlooked in the kinetic treatments by Achterberg (1983, Astron. Astrophys., 119, 274) and McKenzie and Volk (1982, Astron. Astrophys., 116, 191), and suggested that it is responsible for substantial amplification of the ambient magnetic field. Magnetic field amplification is also an important issue in the problem of the formation and structure of relativistic shock fronts, particularly in relation to models of gamma-ray bursts. We have therefore generalized the linear analysis to apply to this case, assuming a relativistic background plasma and a monoenergetic, unidirectional incoming proton beam. We find essentially the same non-resonant instability observed by Bell and show that also, under GRB conditions, it grows much faster than the resonant waves. We quantify the extent to which thermal effects in the background plasma limit the maximum growth rate.
Abstract:
We present optical and near-infrared photometry and spectroscopy of SN 2009ib, a Type II-P supernova in NGC 1559. This object has moderate brightness, similar to those of the intermediate-luminosity SNe 2008in and 2009N. Its plateau phase is unusually long, lasting for about 130 d after explosion. The spectra are similar to those of the subluminous SN 2002gd, with moderate expansion velocities. We estimate the Ni-56 mass produced as 0.046 ± 0.015 M⊙. We determine the distance to SN 2009ib using both the expanding photosphere method (EPM) and the standard candle method. We also apply EPM to SN 1986L, a Type II-P SN that exploded in the same galaxy. Combining the results of different methods, we conclude the distance to NGC 1559 to be D = 19.8 ± 3.0 Mpc. We examine archival, pre-explosion images of the field taken with the Hubble Space Telescope, and find a faint source at the position of the SN, which has a yellow colour [(V - I)_0 = 0.85 mag]. Assuming it is a single star, we estimate its initial mass as M_ZAMS = 20 M⊙. We also examine the possibility that, instead of the yellow source, the progenitor of SN 2009ib is a red supergiant star too faint to be detected. In this case, we estimate the upper limit for the initial zero-age main sequence (ZAMS) mass of the progenitor to be ~14-17 M⊙. In addition, we infer the physical properties of the progenitor at the explosion via hydrodynamical modelling of the observables, and estimate the total energy as ~0.55 × 10^51 erg, the pre-explosion radius as ~400 R⊙, and the ejected envelope mass as ~15 M⊙, which implies that the mass of the progenitor before explosion was ~16.5-17 M⊙.
Abstract:
Introduction
The use of video capture of lectures in Higher Education is not a recent occurrence, with web-based learning technologies, including digital recording of live lectures, becoming increasingly common offerings at universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade, improvements in technical infrastructure, including the availability of high-speed broadband, have increased the potential and use of video lecture capture. This has led to a variety of lecture capture formats, including podcasting, live streaming, and delayed broadcasting of whole or part of lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the simulated recording of lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of videoing lectures.
The clustered areas of research in video lecture capture have the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al., 2012).
Generally, the research has reported positively on the benefits of lecture capture for both students and faculty. This, coupled with improvements in technical infrastructure and student demand, may well mean that the use of video lecture capture will continue to increase in tertiary education over the coming years. However, there is relatively little research on the effects of lecture capture specifically in the area of computer programming, with Watkins et al. (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and iterative nature of the actual code development process (Watkins et al., 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and the video of on-screen projected content were recorded) impacted the delivery and learning of a programme of study in an MSc Software Development course at Queen's University Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates with non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. The study also analyses and reports on empirical data from attendance records and various video viewing statistics. In addition, qualitative data was collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 postgraduate students taking a compulsory module in Java programming in the first semester of a one-year MSc in Software Development. A pre-course survey found that 58% of students preferred to have videos of "key moments" of lectures available rather than whole lectures. A large-scale study carried out by Guo concluded that "shorter videos are much more engaging" (Guo, 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of each lecture directly before or after its actual physical delivery, in a quiet environment, and then uploaded the videos directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by related coding demonstration activities, and were faithful replications of the face-to-face lecture. Generally, each lecture was supported by two to three videos with durations ranging from 20 to 30 minutes.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one. To assess the effect on attendance for the programming module, a control was established: a Database module taken by the same students and running in the same semester.
Access engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported on the following areas for all videos and for each individual video: views (hits), audience retention, viewing devices/operating systems used, and minutes watched.
Student attitudes
Three surveys were conducted to investigate student attitudes towards the videoing of lectures: the first before the start of the programming module, the second at the mid-point, and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The mid-point survey gathered data on how the students had individually been using the system up to that point, including how many videos each had watched, viewing duration, primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the mid-point survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, illustrating that there is little or no effect on attendance at lectures. The use of the videos appears to help promote continual learning, but they are particularly accessed by students during assessment periods. Students respond positively to the ability to access lectures digitally as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There also appear to be significant benefits to recording parts of lectures rather than whole lectures. Despite the increase in the popularity of online learning via MOOCs and the promotion of video learning on mobile devices, the viewing analytics show that in this study the vast majority of students accessed the online videos at home on laptops or desktops. However, this is likely due, in part, to the nature of the taught subject: programming.
The research involved pre-recording the lecture in smaller timed units and then uploading them for distribution, to counteract existing quality issues with recording entire live lectures. However, the advancement and consequent improvement in the quality of in-situ lecture capture equipment may well negate the need to record elsewhere. The research has also highlighted an area of potentially very significant use for performance analysis and improvement that could have major implications for the quality of teaching: a study of the analytics of video viewings could provide a rapid formative feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or separately, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.
Abstract:
BACKGROUND: We sought to determine whether corneal biomechanical parameters are predictive of reduction in axial length after anti-metabolite trabeculectomy. METHODS: Chinese subjects undergoing trabeculectomy with mitomycin C by a single experienced surgeon underwent the following measurements: corneal hysteresis (CH, Ocular Response Analyzer, Reichert Ophthalmic Instruments), Goldmann intra-ocular pressure (IOP), central corneal thickness (CCT) and axial length (AL, IOLMaster, Carl Zeiss Meditec, Dublin, CA) were measured pre-operatively, and AL, CH and IOP were measured 1 day and 1 week post-operatively. RESULTS: The mean age of the 31 subjects was 52.0 ± 15.2 years, and 15 (48.4%) were female. The mean pre-operative IOP of 21.4 ± 9.3 mmHg was reduced to 8.2 ± 4.6 mmHg at 1 day and 11.0 ± 4.4 mmHg at 1 week post-operatively (p < 0.001). AL declined from 22.99 ± 0.90 mm to 22.76 ± 0.87 mm at 1 day and 22.74 ± 0.90 mm at 1 week; 30/31 (96.8%) eyes had a decline in AL (p < 0.001, sign test). In multivariate linear regression models including post-operative data from 1 day and 1 week, greater decline in Goldmann IOP (p < 0.0001), greater pre-operative axial length (p < 0.001) and lower pre-operative CH (p = 0.03) were all associated with greater decline in post-operative axial length. CONCLUSIONS: Eyes with a lesser ability of the ocular coat to absorb energy (lower CH) had a significantly greater decrease in axial length after trabeculectomy-induced IOP lowering.
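The multivariate model form described (axial-length decline regressed on IOP decline, pre-operative axial length and corneal hysteresis) can be sketched as follows. All numbers below are synthetic and purely illustrative; only the shape of the model follows the abstract, and the coefficients are invented, not the study's estimates.

```python
import numpy as np

# Entirely synthetic, illustrative data -- not the study's measurements.
rng = np.random.default_rng(42)
n = 31
iop_drop = rng.uniform(5.0, 20.0, n)   # decline in Goldmann IOP (mmHg)
pre_al   = rng.uniform(22.0, 24.0, n)  # pre-operative axial length (mm)
ch       = rng.uniform(7.0, 12.0, n)   # corneal hysteresis (mmHg)

# Assume axial-length decline (mm) follows the reported direction of effects:
# larger IOP drop and longer eyes -> more shrinkage; higher CH -> less.
al_decline = 0.010 * iop_drop + 0.050 * (pre_al - 22.0) - 0.008 * ch

# Fit the multivariate linear model by ordinary least squares.
X = np.column_stack([iop_drop, pre_al - 22.0, ch, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, al_decline, rcond=None)
```
On this noiseless synthetic data the fit recovers the assumed coefficients exactly; with real measurements one would also report p-values for each predictor, as the abstract does.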