931 results for plasmonic platforms
Abstract:
Massive Open Online Courses (MOOCs) are a new addition to open educational provision. They are offered mainly by prestigious universities on various commercial and non-commercial MOOC platforms, allowing anyone who is interested to experience the world-class teaching practiced in these universities. MOOCs have attracted wide interest from around the world. However, learner demographics in MOOCs suggest that some demographic groups are underrepresented. At present, MOOCs seem to be better serving the continuous professional development sector.
Abstract:
The past years have shown enormous advances in sequencing and array-based technologies, producing supplementary or alternative views of the genome stored in various formats and databases. Their sheer volume and differing data scope pose a challenge to jointly visualizing and integrating diverse data types. We present AmalgamScope, a new interactive software tool focused on assisting scientists with the annotation of the human genome and particularly the integration of annotation files from multiple data types, using gene identifiers and genomic coordinates. Supported platforms include next-generation sequencing and microarray technologies. The available features of AmalgamScope range from the annotation of diverse data types across the human genome to the integration of the data based on the annotation information and the visualization of the merged files within chromosomal regions or across the whole genome. Additionally, users can define custom transcriptome library files for any species and use the tool's remote file-exchange server options.
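As a hedged illustration of the identifier-based integration described above (not AmalgamScope's own code), the following sketch merges two hypothetical tab-separated annotation tables, one from a sequencing pipeline and one from a microarray pipeline, on a shared gene identifier and then restricts the merged view to a chromosomal region; all file names, column names and coordinates are assumptions made for the example.

```python
# Sketch: merge two annotation tables on a shared gene identifier.
# File names, column names and coordinates are hypothetical, for illustration only.
import pandas as pd

ngs = pd.read_csv("ngs_annotation.tsv", sep="\t")            # e.g. gene_id, chrom, start, end, coverage
array = pd.read_csv("microarray_annotation.tsv", sep="\t")   # e.g. gene_id, probe_id, log2_expression

# Outer join keeps genes that appear in only one of the two sources.
merged = ngs.merge(array, on="gene_id", how="outer", suffixes=("_ngs", "_array"))

# Restrict the merged view to an arbitrary chromosomal region of interest.
region = merged[(merged["chrom"] == "chr17") &
                (merged["start"] >= 7_565_000) &
                (merged["end"] <= 7_590_000)]

merged.to_csv("merged_annotation.tsv", sep="\t", index=False)
print(f"{len(merged)} merged records, {len(region)} in the selected region")
```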
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption continues to grow. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
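As an illustration of the kind of import step that K-Surfer automates inside KNIME, the sketch below reads a FreeSurfer segmentation statistics file (e.g. aseg.stats) into a table. It is not K-Surfer code; it assumes the conventional layout of '#'-prefixed header lines with a '# ColHeaders ...' line naming the columns, which may differ between FreeSurfer versions.

```python
# Sketch: load a FreeSurfer *.stats file (e.g. aseg.stats) into a pandas DataFrame.
# Assumes '#' comment lines and a '# ColHeaders ...' line; illustrative only.
import pandas as pd

def read_freesurfer_stats(path: str) -> pd.DataFrame:
    columns, rows = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("# ColHeaders"):
                columns = line.split()[2:]      # column names follow the '# ColHeaders' prefix
            elif line and not line.startswith("#"):
                rows.append(line.split())       # whitespace-delimited data row
    df = pd.DataFrame(rows, columns=columns)
    for col in df.columns:                      # convert numeric columns, keep labels as text
        try:
            df[col] = pd.to_numeric(df[col])
        except (ValueError, TypeError):
            pass
    return df

# Example: per-structure volumes for one subject (column names as in a typical aseg.stats).
stats = read_freesurfer_stats("subject01/stats/aseg.stats")
print(stats[["StructName", "Volume_mm3"]].head())
```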
Abstract:
A parallel formulation for the simulation of a branch prediction algorithm is presented. This parallel formulation identifies independent tasks in the algorithm that can be executed concurrently. The parallel implementation is based on the multithreading model and two parallel programming platforms: pthreads and Cilk++. Execution performance improves by up to a factor of 7 for a generic 2-bit predictor on a 12-core multiprocessor system.
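The abstract does not spell out the predictor itself, but a generic 2-bit predictor is a small, well-known state machine. The sketch below is a minimal sequential Python version of a table of 2-bit saturating counters, offered only to illustrate what the paper's pthreads and Cilk++ formulations parallelise; the table size and the example trace are arbitrary assumptions.

```python
# Sketch: a generic 2-bit saturating-counter branch predictor (sequential version).
# Counter states 0-1 predict "not taken", states 2-3 predict "taken"; each resolved
# branch nudges its counter up or down by one, saturating at 0 and 3.

class TwoBitPredictor:
    def __init__(self, table_bits: int = 12):
        self.mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)   # start weakly "not taken"

    def predict(self, pc: int) -> bool:
        return self.table[pc & self.mask] >= 2

    def update(self, pc: int, taken: bool) -> None:
        i = pc & self.mask
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)

# Usage: replay a trace of (branch address, outcome) pairs and count correct predictions.
trace = [(0x400100, True), (0x400100, True), (0x400200, False), (0x400100, False)]
predictor = TwoBitPredictor()
hits = 0
for pc, taken in trace:
    hits += predictor.predict(pc) == taken
    predictor.update(pc, taken)
print(f"accuracy: {hits / len(trace):.2f}")
```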
Abstract:
Massive Open Online Courses (MOOCs) have become very popular among learners: millions of users from around the world have registered with leading platforms, and hundreds of universities (and other organizations) offer MOOCs. However, the sustainability of MOOCs is a pressing concern, as MOOCs incur up-front creation costs, maintenance costs to keep content relevant and ongoing support costs to provide facilitation while a course is being run. At present, charging a fee for certification (for example Coursera Signature Track and FutureLearn Statement of Completion) seems a popular business model. In this paper, the authors discuss other possible business models and their pros and cons. The business models discussed here include:
- Freemium model: providing content freely but charging for premium services such as course support, tutoring and proctored exams.
- Sponsorships: courses can be created in collaboration with industry, with industry sponsorships covering the costs of course production and offering. For example, the Teaching Computing course was offered by the University of East Anglia on the FutureLearn platform with sponsorship from British Telecom, while the UK Government sponsored the course Introduction to Cyber Security offered by the Open University on FutureLearn.
- Initiatives and grants: governments, the EU Commission or corporations could commission the creation of courses through grants and initiatives according to the skills gaps identified for the economy. For example, the UK Government's National Cyber Security Programme has supported a course on cyber security. Similar initiatives could also provide funding to support relevant course development and offering.
- Donations: free software, Wikipedia and early OER initiatives such as MIT OpenCourseWare accept donations from the public, and this could also be used as a business model where learners contribute (if they wish) to the maintenance and facilitation of a course.
- Merchandise: selling merchandise could also bring revenue to MOOCs. As many participants do not seek formal recognition for their completion of a MOOC (European Commission, 2014), merchandise that presents their achievement in a playful way could be attractive to them.
- Sale of supplementary material: supplementary course material, in the form of an online or physical book or similar, could be sold, with the revenue reinvested in course delivery.
- Selective advertising: courses could carry advertisements relevant to learners.
- Data sharing: though a controversial topic, sharing learner data with relevant employers or similar parties could be another revenue model for MOOCs.
- Follow-on events: courses could lead to follow-on summer schools, courses or other real-life or online events that are paid for, in which case a percentage of the revenue could be passed on to the MOOC for its upkeep.
Though these models are all possible ways of generating revenue for MOOCs, some are more controversial and sensitive than others. Nevertheless, unless appropriate business models are identified, the sustainability of MOOCs will remain problematic.
Abstract:
Spatial variability of liquid cloud water content and rainwater content is analysed from three different observational platforms: in situ measurements from research aircraft, land-based remote sensing techniques using radar and lidar, and spaceborne remote sensing from CloudSat. The variance is found to increase with spatial scale, but also depends strongly on the cloud or rain fraction regime, with overcast regions containing less variability than broken cloud fields. This variability is shown to lead to large biases, up to a factor of 4, in both the autoconversion and accretion rates estimated at a model grid scale of ≈40 km by a typical microphysical parametrization using in-cloud mean values. A parametrization for the subgrid variability of liquid cloud and rainwater content is developed, based on the observations, which varies with both the grid scale and cloud or rain fraction, and is applicable for all model grid scales. It is then shown that if this parametrization of the variability is analytically incorporated into the autoconversion and accretion rate calculations, the bias is significantly reduced.
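To make the source of this bias concrete: warm-rain process rates are commonly modelled as power laws of the in-cloud water content, and for an exponent greater than one the rate evaluated at the grid-box mean underestimates the mean of the rate. As a hedged illustration only (the paper derives its own observation-based parametrization, which also depends on grid scale and cloud or rain fraction), assume a lognormal subgrid distribution of in-cloud water content q with mean value and fractional standard deviation f (standard deviation divided by the mean); then for a process rate proportional to q^a,

$$
\mathrm{E}\left[q^{a}\right] \;=\; \bar{q}^{\,a}\left(1+f^{2}\right)^{a(a-1)/2} \;\ge\; \bar{q}^{\,a}, \qquad a \ge 1 .
$$

The factor (1+f^2)^{a(a-1)/2} is the multiplicative bias incurred by using the in-cloud mean alone; with exponents of roughly 2 to 3, typical of autoconversion formulations, and order-one fractional variability it readily reaches the factor-of-several biases quoted above, which is why folding the subgrid variance analytically into the rate calculation removes most of the bias.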
Abstract:
This paper examines the determinants of cross-platform arbitrage profits. We develop a structural model that enables us to decompose the likelihood of an arbitrage opportunity into three distinct factors: the fixed cost to trade the opportunity, the extent to which one of the platforms delays a price update, and the impact of the order flow on the quoted prices (inventory and asymmetric information effects). We then investigate the predictions of the theoretical model for the European bond market by estimating a probit model. Our main finding is that the empirical results strongly corroborate the predictions of the structural model. A cross-market arbitrage opportunity has a certain degree of predictability, with an optimal ex ante scenario represented by a low level of spreads on both platforms, a time of day close to the end of trading hours and a high volume of trade.
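A minimal sketch of the kind of probit specification described above, assuming statsmodels and a hypothetical data set with one row per observation; the regressor names (spreads on the two platforms, minutes to the close, traded volume) mirror the drivers discussed in the abstract but are illustrative stand-ins, not the paper's actual variables.

```python
# Sketch: probit model for the occurrence of a cross-platform arbitrage opportunity.
# File name and column names are hypothetical; they echo the abstract's predictors
# (spreads on both platforms, time of day, trading volume) for illustration only.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("bond_quotes.csv")          # hypothetical data set, one row per observation

X = sm.add_constant(df[["spread_platform_a", "spread_platform_b",
                        "minutes_to_close", "trade_volume"]])
y = df["arbitrage_opportunity"]              # 1 if a cross-platform arbitrage was observed, else 0

model = sm.Probit(y, X).fit()
print(model.summary())

# Predicted probability for the favourable ex ante scenario described above:
# tight spreads on both platforms, close to the end of trading, high volume.
scenario = pd.DataFrame({"const": [1.0],
                         "spread_platform_a": [0.01], "spread_platform_b": [0.01],
                         "minutes_to_close": [5.0], "trade_volume": [500.0]})
print(model.predict(scenario))
```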
Abstract:
Design patterns are a way of sharing evidence-based solutions to educational design problems. The design patterns presented in this paper were produced through a series of workshops, which aimed to identify Massive Open Online Course (MOOC) design principles from workshop participants’ experiences of designing, teaching and learning on these courses. MOOCs present a challenge for the existing pedagogy of online learning, particularly as it relates to promoting peer interaction and discussion. MOOC cohort sizes, participation patterns and diversity of learners mean that discussions can remain superficial, become difficult to navigate, or never develop beyond isolated posts. In addition, MOOC platforms may not provide sufficient tools to support moderation. This paper draws on four case studies of designing and teaching on a range of MOOCs, presenting seven design narratives relating to the experience in these MOOCs. Evidence presented in the narratives is abstracted in the form of three design patterns created through a collaborative process using techniques similar to those used in collective autoethnography. The patterns, “Special Interest Discussions”, “Celebrity Touch” and “Look and Engage”, draw together shared lessons and present possible solutions to the problem of creating, managing and facilitating meaningful discussion in MOOCs through the careful use of staged learning activities and facilitation strategies.
Abstract:
Weather, climate, water and related environmental conditions, including air quality, all have profound effects on cities. A growing importance is being attached to understanding and predicting atmospheric conditions and their interactions with other components of the Earth System in cities, at multiple scales. We highlight the need for: (1) development of high-resolution coupled environmental prediction models that include realistic city-specific processes, boundary conditions and fluxes; (2) enhanced observational systems to support (force, constrain, evaluate) these models to provide high quality forecasts for new urban services; (3) provision of meteorological and related environmental variables to aid protection of human health and the environment; (4) new targeted and customized delivery platforms using modern communication techniques, developed with users to ensure that services, advice and warnings result in appropriate action; and (5) development of new skill and capacity to make best use of technologies to deliver new services in complex, challenging and evolving city environments. We highlight the importance of a coordinated and strategic approach that draws on, but does not replicate, past work to maximize benefits to stakeholders.
Abstract:
The well-dated section of Cassis-La Bédoule in the South Provencal Basin (southern France) allows for a detailed reconstruction of palaeoenvironmental change during the latest Barremian and Early Aptian. For this study, phosphorus (P) and clay-mineral contents, stable-isotope ratios on carbonate (δ13Ccarb) and organic matter (δ13Corg), and redox-sensitive trace elements (RSTE: V, U, As, Co, and Mo) have been measured in this historical stratotype. The base of the section consists of rudist limestone, which is attributed to the Urgonian platform. Low P and RSTE contents, together with up to 30% kaolinite, indicate deposition under oligotrophic and oxic conditions and the presence of warm, humid climatic conditions on the adjacent continent. The top of the Urgonian succession is marked by a hardground with encrusted brachiopods and bivalves, which is interpreted as a drowning surface. The section continues with a succession of limestone and marl containing the first occurrence of planktonic foraminifera. This interval includes several laminated, organic-rich layers recording RSTE enrichments and high Corg:Ptot ratios. The deposition of these organic-rich layers was associated with oxygen-depleted conditions and a large positive excursion in δ13Corg. During this interval, a negative peak in the δ13Ccarb record is observed, which dates to the latest Barremian. This excursion is coeval with negative excursions elsewhere in Tethyan platform and basin settings and is explained by the increased input of light dissolved inorganic carbon by rivers and/or volcanic activity. In this interval, an increase in P content, owing to reworking of nearshore sediments during the transgression, is coupled with a decrease in the content of kaolinite, which tends to be deposited in more proximal areas. The overlying hemipelagic sediments of the Early Aptian Deshayesites oglanlensis and D. weissi zones indicate rather stable palaeoenvironmental conditions with low P content and stable δ13C records. A change towards marl-dominated beds occurs close to the end of the D. weissi zone. These beds display a prolonged decrease in their δ13Ccarb and δ13Corg records, which lasted until the end of the Deshayesites deshayesi subzone (corresponding to C3 in Menegatti et al., 1998). This is followed by a positive shift during the Roloboceras hambrovi and Deshayesites grandis subzones, which corresponds in time to the oceanic anoxic event (OAE) 1a interval. This positive shift is coeval with two increases in the P content. The marly interval equivalent to OAE 1a lacks organic-rich deposits and RSTE enrichments, indicating that oxic conditions prevailed in this particular part of the Tethys ocean. The clay mineralogy is dominated by smectite, which is interpreted to reflect trapping of kaolinite on the surrounding platforms rather than indicating a drier climate.
Abstract:
The Emissions around the M25 motorway (EM25) campaign took place over the megacity of London in the United Kingdom in June 2009 with the aim of characterising the composition and properties of trace gases and aerosol entering and emitted from the urban region. It featured two mobile platforms, the UK BAe-146 Facility for Airborne Atmospheric Measurements (FAAM) research aircraft and a ground-based mobile lidar van, both travelling in circuits around London, roughly following the path of the M25 motorway circling the city. We present an overview of findings from the project, which took place during typical UK summertime pollution conditions. Emission ratios of volatile organic compounds (VOCs) to acetylene and carbon monoxide emitted from the London region were consistent with measurements in and downwind of other large urban areas and indicated that traffic and associated fuel evaporation were major sources. Sub-micron aerosol composition was dominated by secondary species including sulphate (24% of sub-micron mass in the London plume and 29% in the non-plume regional aerosol), nitrate (24% plume; 20% regional) and organic aerosol (29% plume; 31% regional). The primary sub-micron aerosol emissions from London were minor compared to the larger regional background, with only limited increases in aerosol mass in the urban plume relative to the background (~12% mass increase on average). Black carbon mass was the major exception and more than doubled in the urban plume, leading to a decrease in the single scattering albedo from 0.91 in the regional aerosol to 0.86 in the London plume, on average. Our observations indicated that regional aerosol plays a major role in aerosol concentrations around London, at least during typical summertime conditions, meaning that future efforts to reduce PM levels in London must account for regional as well as local aerosol sources.
Abstract:
Studies on learning management systems (LMS) have largely been technical in nature, with an emphasis on evaluating the human-computer interaction (HCI) processes involved in using the LMS. This paper reports a study that evaluates the information interaction processes on an eLearning course used in teaching an applied Statistics course. The eLearning course is treated as a synonym for an information system. The study explores issues of missing context in information stored in information systems. Using the semiotic framework as a guide, the researchers evaluated an existing eLearning course with a view to proposing a model for designing improved eLearning courses for future eLearning programmes. In this exploratory study, a survey questionnaire is used to collect data from 160 participants on an eLearning course in Statistics in Applied Climatology. The views of the participants are analysed with a focus only on the human information interaction issues. Using the semiotic framework as a guide, syntactic, semantic, pragmatic and social context gaps or problems were identified. The information interaction problems identified include ambiguous instructions, inadequate information, lack of sound and interface design problems, among others. These problems affected the quality of new knowledge created by the participants. The researchers thus highlighted the challenges of missing information context when data is stored in an information system. The study concludes by proposing a human information interaction model for improving information interaction quality in the design of eLearning courses on learning management platforms and other information systems.
Abstract:
Adaptive governance is the use of novel approaches within policy to support experimentation and learning. Social learning reflects the engagement of interdependent stakeholders within this learning. Much attention has focused on these concepts as a solution for resilience in governing institutions in an uncertain climate, resilience representing the ability of a system to absorb shock and to retain its function and form through reorganisation. However, there are still many questions as to how these concepts enable resilience, particularly in vulnerable, developing contexts. A case study from Uganda shows how these concepts promote resilient livelihood outcomes among rural subsistence farmers within a decentralised governing framework. This approach has the potential to highlight the dynamics and characteristics of a governance system that may manage change. The paper draws on the enabling characteristics of adaptive governance, including lower-scale dynamics of bonding and bridging ties and strong leadership. Central to these processes were learning platforms promoting knowledge transfer, leading to improved self-efficacy, innovation and livelihood skills. However, even though aspects of adaptive governance were identified as contributing to resilience in livelihoods, some barriers remained. Reflexivity and multi-stakeholder collaboration were evident in governing institutions; however, limited self-organisation and vertical communication demonstrated few opportunities for shifts in governance, which was severely challenged by inequity, politicisation and elite capture. The paper concludes by outlining implications for climate adaptation policy: promoting the importance of mainstreaming adaptation alongside existing policy trajectories, highlighting the significance of collaborative spaces for stakeholders, and tackling inequality and corruption.
Abstract:
International non-governmental organisations (NGOs) are powerful political players who aim to influence global society. In order to be effective on a global scale, they must communicate their goals and achievements in different languages. Translation and translation policy play an essential role here. Despite NGOs’ important position in politics and society, not much is known about how these organisations, which often have limited funds available, organise their translation work. This study aims to contribute to Translation Studies, and more specifically to the investigation of institutional translation, by exploring translation policies at Amnesty International, one of the most successful and powerful human rights NGOs around the world. Translation policy is understood as comprising three components: translation management, translation practices, and translation beliefs, based on Spolsky’s study of language policy (2004). The thesis investigates how translation is organised, what kind of policies different Amnesty offices have in place, and how this is reflected in their translation products. The thesis thus also examines how translation and translation policy impact on the organisation’s message and voice as these are spread around the world. An ethnographic approach is used for the analysis of various data sets collected during fieldwork. These include policy documents, guidelines on writing and translation, recorded interviews, e-mail correspondence, and fieldnotes. The thesis first explores Amnesty’s global translation policy, and then presents the results of a comparative analysis of local translation policies at two concrete institutions: Amnesty International Language Resource Centre in Paris (AILRC-FR) and Amnesty International Vlaanderen (AIVL). A corpus of English source texts and Dutch (AIVL) and French (AILRC-FR) target texts is analysed. The findings of the analysis of translation policies and of the translation products are then combined to illustrate how translation impacts on Amnesty’s message and voice. The research results show that there are large differences in how translation is organised depending on the local office and the language(s), and that this also influences the way in which Amnesty’s message and voice are represented. For Dutch and French specifically, translation policies and translation products differ considerably. The thesis describes how these differences are often the result of differing beliefs and assumptions about translation, and how staff members within Amnesty are not aware of the different conceptions of translation that exist within Amnesty International as a formal institution. Organising opportunities where translation can be discussed (meetings, workshops, online platforms) can help to reduce such differences. The thesis concludes by suggesting that an increased awareness of these issues will enable Amnesty to make more effective use of translation in its fight against human rights violations.
Abstract:
In an era of fragmenting audiences and diversified viewing platforms, youth television needs to move fast and make a lot of noise in order to capture and maintain the attention of the teenage viewer. British ensemble youth drama Skins (E4, 2007-2013) calls attention to itself with its high doses of drugs, chaotic parties and casual attitudes towards sexuality. It also moves quickly, shedding its cast every two seasons as they graduate from school, then renewing itself with a fresh generation of 16-year-old characters, three cycles in total. This essay explores the challenges of maintaining audience connections whilst resetting the narrative clock with each cycle. I suggest that the development of the Skins brand was key to the programme’s success. Branding is particularly important for an audience demographic who increasingly consume their television outside of broadcast flow, and essential for a programme which renews its cast every two years. The Skins brand operates as a framework and, as the central audience draw, has the strength to maintain audience connections when the programme ‘graduates’ the characters viewers identify with at the close of each cycle and starts again from scratch. This essay explores how the Skins brand constructs a cohesive identity across its multiple generations, yet also considers how the cyclic form poses challenges for the programme’s representations and narratives. This cyclic form allows Skins to repeatedly reach out to a new audience who come of age alongside each new generation and to reflect shifts in British youth culture. Thus Skins remains ever-youthful, seeking to maintain an at times painfully hip identity. Yet the programme has a somewhat schizophrenic identity, torn between its roots in British realist drama and surrealist comedy and an escapist, aspirational glamour that shows the influence of US Teen TV. This combination results in a tendency towards heightened melodrama at odds with Skins’ claims to authenticity (its much-vaunted teenage advisors and young writers), with the cyclic structure serving to amplify the programme’s excessive tendencies. Each cycle wrestles with the need for continuity and familiarity, partly maintained through brand, aesthetic and setting, and a desire for freshness and originality, to assert difference from what has gone before. I suggest that the inevitable need for each cycle to ‘top’ what has gone before results in a move away from character-based intimacy and the everyday towards high-stakes drama and violence which sits uncomfortably within British youth television.