61 results for blended workflow


Relevance: 20.00%

Abstract:

The purpose of this paper is to explore the implementation of online learning in distance education delivery at Yellow Fields University (a pseudonym) in Sri Lanka. The implementation of online distance education at the University included the use of blended learning. The policy initiative to introduce online learning for distance education in Sri Lanka was guided by the expectation of cost reduction, and the implementation was financed under the Distance Education Modernization Project. The paper presents one case study from a larger multiple-case study that employed an ethnographic research approach to investigate the impact of ICT on distance education in Sri Lanka. Documents, questionnaires and qualitative interviews were used for data collection. There was a significant positive relationship between ownership of computers and students’ ability to use computers for word processing, emailing and Web searching. The lack of access to computers and the Internet, the lack of infrastructure, low levels of computer literacy, the lack of local-language content, and the lack of formal student support services at the University were found to be major barriers to implementing compulsory online activities.

Relevance: 20.00%

Abstract:

Changes in users’ requirements drive the evolution of an information system. Such evolution moves the atomic services that provide functional operations from one state of composition to another, and a challenging issue in this process is ensuring that the resultant service composition remains rational. This paper presents the Service Composition Atomic-Operation Set (SCAOS) method. SCAOS defines 2 classes of atomic operations and 13 kinds of basic service compositions that support the state-change process using Workflow Nets. The workflow net provides the algorithmic capability to compose the required services rationally and to keep the composition rational as the services change. The method can improve the adaptability of information systems to ever-changing business requirements in dynamic environments.
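
The abstract does not enumerate SCAOS's two operation classes or its 13 basic composition kinds, but the core idea, editing a composition expressed as a workflow net and then checking that it remains "rational", can be sketched. Below is a minimal Python illustration under that reading; the class, the atomic "add transition" operation, and the structural check (every node on a path from source to sink, a standard workflow-net condition) are stand-ins of ours, not the paper's definitions.

```python
# Minimal workflow-net sketch (hypothetical API; SCAOS's actual
# operations and composition kinds are not listed in the abstract).
from collections import defaultdict

class WorkflowNet:
    def __init__(self):
        self.places, self.transitions = set(), set()
        self.arcs = defaultdict(set)  # node -> successor nodes

    def add_transition(self, t, pre, post):   # one atomic "add" operation
        self.transitions.add(t)
        self.places.update(pre); self.places.update(post)
        for p in pre:  self.arcs[p].add(t)
        for p in post: self.arcs[t].add(p)

    def is_rational(self, source, sink):
        # Structural workflow-net condition: every node lies on a
        # path from the source place to the sink place.
        def reach(start, arcs):
            seen, stack = set(), [start]
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n); stack.extend(arcs[n])
            return seen
        fwd = reach(source, self.arcs)
        rev = defaultdict(set)
        for n, succs in self.arcs.items():
            for s in succs:
                rev[s].add(n)
        bwd = reach(sink, rev)
        return (self.places | self.transitions) <= (fwd & bwd)

net = WorkflowNet()
net.add_transition("invoke_serviceA", pre={"start"}, post={"p1"})
net.add_transition("invoke_serviceB", pre={"p1"}, post={"end"})
print(net.is_rational("start", "end"))  # True for this sequential composition
```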

Relevance: 20.00%

Abstract:

Many studies accept the assumption that learning is promoted when teaching styles and learning styles are well matched. In this study, the relationships between learning styles, learning patterns, gender (as a selected demographic feature) and learners’ performance were quantitatively investigated in a blended learning setting. The environment adopted a traditional ‘one-size-fits-all’ teaching approach without considering individual users’ preferences and attitudes, so the study can provide evidence about the value of taking such factors into account in Adaptive Educational Hypermedia Systems (AEHSs). Felder and Soloman’s Index of Learning Styles (ILS) was used to identify the learning styles of 59 undergraduate students at the University of Babylon, and five hypotheses were investigated in the experiment. Our findings show no statistically significant effect for some of the assessed factors. However, the processing dimension, the total number of hits on the course website, and gender each had a statistically significant effect on learners’ performance. This finding needs further investigation to identify the factors affecting students’ achievement that should be considered in AEHSs.
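
A rough sketch of the kind of significance testing the study reports, using synthetic data in place of the 59 students’ records (which are not available here); the variable coding and the choice of tests are illustrative, not the authors’ exact procedure.

```python
# Illustrative only: synthetic data standing in for the study's 59 students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hits = rng.integers(10, 300, size=59)        # hits on the course website
gender = rng.integers(0, 2, size=59)         # hypothetical 0/1 coding
score = 40 + 0.1 * hits + 5 * gender + rng.normal(0, 8, size=59)

# Does website activity relate to performance?
r, p = stats.pearsonr(hits, score)
print(f"hits vs score: r={r:.2f}, p={p:.4f}")

# Do the two gender groups differ in performance?
u, p = stats.mannwhitneyu(score[gender == 0], score[gender == 1])
print(f"gender difference: U={u:.1f}, p={p:.4f}")
```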

Relevance: 20.00%

Abstract:

Background: In many experimental pipelines, clustering of multidimensional biological datasets is used to detect hidden structures in unlabelled input data. Taverna is a popular workflow management system used to design and execute scientific workflows and aid in silico experimentation. The availability of fast unsupervised methods for clustering and visualization in the Taverna platform is important to support data-driven scientific discovery in complex and explorative bioinformatics applications. Results: This work presents a Taverna plugin, the Biological Data Interactive Clustering Explorer (BioDICE), that performs clustering of high-dimensional biological data and provides a nonlinear, topology-preserving projection for the visualization of the input data and their similarities. The core algorithm in the BioDICE plugin is the Fast Learning Self-Organizing Map (FLSOM), an improved variant of the Self-Organizing Map (SOM) algorithm. The plugin generates an interactive 2D map that allows the visual exploration of multidimensional data and the identification of groups of similar objects. The effectiveness of the plugin is demonstrated on a case study related to chemical compounds. Conclusions: The number and variety of available tools, together with its extensibility, have made Taverna a popular choice for the development of scientific data workflows. This work presents a novel plugin, BioDICE, which adds a data-driven knowledge discovery component to Taverna. BioDICE provides an effective and powerful clustering tool that can be adopted for the explorative analysis of biological datasets.
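
FLSOM itself is not specified in the abstract, so as a hedged illustration here is a minimal classic SOM in NumPy, the algorithm FLSOM improves upon: competitive selection of a best-matching unit followed by a neighbourhood-weighted weight update, which is what produces the topology-preserving 2D map that BioDICE visualizes. All parameters are illustrative.

```python
# Minimal classic SOM sketch; FLSOM's specific speed-ups are not modelled.
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is closest to x.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Decay learning rate and neighbourhood radius over time.
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        nb = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * nb * (x - weights)
    return weights

# Map 200 random 5-dimensional points onto a 10x10 grid.
som = train_som(np.random.rand(200, 5))
```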

Relevance: 20.00%

Abstract:

This paper seeks to increase the understanding of the performance implications for investors who choose to combine an unlisted real estate portfolio (in this case German Spezialfonds) with a (global) listed real estate element. We call this a “blended” approach to real estate allocations. For the avoidance of doubt, this paper deals purely with real estate equity (listed and unlisted) allocations, and does not incorporate real estate debt (listed or unlisted) or direct property into the process. A previous paper (Moss and Farrelly 2014) showed the benefits of the blended approach as applied to UK Defined Contribution pension schemes. The catalyst for this paper has been the recent attention focused on German pension fund allocations, which have a relatively low (real estate) equity content and a high bond content. We use the MSCI Spezialfonds Index as a proxy for domestic German institutional real estate allocations, and the EPRA Global Developed Index as a proxy for a global listed real estate allocation. We also examine whether a rules-based trading strategy, in this case trend following, can improve risk-adjusted returns above those of a simple buy-and-hold strategy over our sample period, 2004-2015. Our findings are that by blending a 30% global listed allocation with a 70% allocation to Spezialfonds (as opposed to a typical 100% weighting), real estate allocation returns increase from 2.88% p.a. to 5.42% p.a. Volatility increases, but only to 6.53%, and there is a noticeable impact on maximum drawdown, which increases to 19.4%. Using a trend-following strategy, raw returns improve from 2.88% to 6.94% p.a., the Sharpe ratio increases from 1.05 to 1.49, and the maximum drawdown is only 1.83%, compared to 19.4% under buy-and-hold. Finally, adding this (9%) real estate allocation to a mixed-asset portfolio allocation typical of German pension funds improves both the raw return (from 7.66% to 8.28%) and the Sharpe ratio (from 0.91 to 0.98).
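
The blending arithmetic and the trend-following overlay can be sketched as follows; the return series here are synthetic stand-ins (the MSCI Spezialfonds and EPRA index data are not reproduced), and the 10-month moving-average rule is one common trend-following formulation, not necessarily the paper's exact specification.

```python
# Sketch of a 70/30 unlisted/listed blend with a trend-following overlay.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
unlisted = pd.Series(rng.normal(0.0024, 0.005, 144))  # stand-in Spezialfonds
listed   = pd.Series(rng.normal(0.006, 0.04, 144))    # stand-in global listed

blend = 0.7 * unlisted + 0.3 * listed                 # the paper's 70/30 blend

def perf(r, rf=0.0):
    ann_ret = (1 + r).prod() ** (12 / len(r)) - 1     # annualised return
    ann_vol = r.std() * np.sqrt(12)                   # annualised volatility
    wealth = (1 + r).cumprod()
    max_dd = (1 - wealth / wealth.cummax()).max()     # maximum drawdown
    return round(ann_ret, 4), round(ann_vol, 4), \
           round((ann_ret - rf) / ann_vol, 2), round(max_dd, 4)

# Trend-following overlay: hold the blend only when its wealth index is
# above a 10-month moving average, otherwise sit in cash (0% return).
wealth = (1 + blend).cumprod()
in_market = (wealth > wealth.rolling(10).mean()).shift(1, fill_value=False)
tf = blend.where(in_market, 0.0)

print("buy-and-hold   :", perf(blend))
print("trend-following:", perf(tf))
```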

Relevance: 20.00%

Abstract:

The level of agreement between climate model simulations and observed surface temperature change is a topic of scientific and policy concern. While the Earth system continues to accumulate energy due to anthropogenic and other radiative forcings, estimates of recent surface temperature evolution fall at the lower end of climate model projections. Global mean temperatures from climate model simulations are typically calculated using surface air temperatures, while the corresponding observations are based on a blend of air and sea surface temperatures. This work quantifies a systematic bias in model-observation comparisons arising from differential warming rates between sea surface temperatures and surface air temperatures over oceans. A further bias arises from the treatment of temperatures in regions where the sea ice boundary has changed. Applying the methodology of the HadCRUT4 record to climate model temperature fields accounts for 38% of the discrepancy in trend between models and observations over the period 1975–2014.
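
The blending effect can be illustrated with a toy calculation: if SSTs warm slightly less than the air just above them, a blended air/sea mean will show less warming than a pure surface-air mean. A minimal sketch with synthetic anomaly fields (area weighting and the sea-ice treatment the paper also discusses are omitted for brevity):

```python
# Toy air/sea blending sketch; all fields are synthetic.
import numpy as np

rng = np.random.default_rng(2)
shape = (36, 72)                       # coarse 5-degree lat/lon grid
tas = rng.normal(0.8, 0.3, shape)      # surface air temperature anomaly (K)
tos = tas - 0.1                        # SST anomaly warming slightly less
ocean_frac = rng.random(shape)         # open-ocean fraction of each cell

# Blend as the observational record does, then compare with the
# pure-air mean that model studies typically report.
blended = ocean_frac * tos + (1 - ocean_frac) * tas
print("global mean, air only:", tas.mean().round(3))
print("global mean, blended :", blended.mean().round(3))
```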

Relevance: 10.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
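
The abstract does not document G-Rex's actual URL scheme, so the endpoints below are invented purely to illustrate the REST interaction pattern described: submit a run as a web-service call, then poll the job resource and pull output back while the run is in progress.

```python
# Hypothetical G-Rex-style REST interaction; every endpoint is invented.
import requests

BASE = "http://cluster.example.ac.uk/grex"   # hypothetical server

# Submit a job: POST the model configuration, receive a job resource.
job = requests.post(f"{BASE}/services/nemo/instances",
                    files={"config": open("namelist", "rb")}).json()

# Poll the job resource and stream output back while the run progresses,
# mirroring G-Rex's incremental output transfer.
status = requests.get(job["url"]).json()
if status["state"] == "finished":
    with open("output.nc", "wb") as f:
        f.write(requests.get(job["url"] + "/outputs/output.nc").content)
```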

Relevance: 10.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre’s HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world’s coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid’s Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with “mpirun”) are simply replaced with calls to “GRexRun” (see the sketch below); (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale grid resources such as the UK National Grid Service.
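
A minimal stand-in for step (3) above: the only change to an existing workflow script is that the local launcher ("mpirun") is swapped for the G-Rex client ("GRexRun"). The command names come from the abstract, but GRexRun's argument order is not given there, so it is invented here.

```python
# Toy version of the mpirun -> GRexRun substitution in a workflow script.
import subprocess

USE_GRID = True

def run_model(exe, nprocs):
    if USE_GRID:
        # GRexRun uploads inputs, launches remotely, streams outputs back.
        # (Argument order is hypothetical.)
        cmd = ["GRexRun", exe, str(nprocs)]
    else:
        cmd = ["mpirun", "-np", str(nprocs), exe]
    subprocess.run(cmd, check=True)

run_model("./nemo.exe", 40)   # one-year global run on 40 processors
```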

Relevance: 10.00%

Abstract:

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners’ competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals and expectations. Objectives: Current methods in requirements engineering are capable of modelling the common user’s behaviour in the domain of knowledge construction. The user’s requirements can be represented as a case in a defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled; however, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user’s requirements and transforming those requirements into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulation, mapping, configuration and learning content, and the results were used as feedback for improvement. Result: The research has produced an ontology model with a set of techniques that support profiling users’ requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance these approaches for modelling individual users’ needs and discovering users’ requirements.
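
The ontology model itself is not reproduced in the abstract; the following toy sketch only illustrates the general idea of profiling a user's requirements as a structured case and deriving an information-provision specification from it. All names, the content catalogue and the matching rule are hypothetical.

```python
# Toy requirements-case profiling and provision-specification matching.
from dataclasses import dataclass, field

@dataclass
class RequirementsCase:
    role: str
    goal: str
    prior_knowledge: set = field(default_factory=set)

CATALOGUE = {
    "intro_to_som": {"topic": "clustering", "level": "beginner"},
    "advanced_som": {"topic": "clustering", "level": "advanced"},
}

def provision_spec(case: RequirementsCase) -> list:
    """Map a requirements case onto the content items that fit it."""
    level = "advanced" if case.goal in case.prior_knowledge else "beginner"
    return [name for name, meta in CATALOGUE.items()
            if meta["topic"] == case.goal and meta["level"] == level]

case = RequirementsCase(role="learner", goal="clustering")
print(provision_spec(case))   # ['intro_to_som']
```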

Relevance: 10.00%

Abstract:

Students may have difficulty in understanding some of the complex concepts taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it has some limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research involves an empirical study of how vodcasts (video podcasts) can be used to enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity to download and view vodcasts on the theory before and after the experimental work; it was the students’ choice when (before, after, or both before and after the practical) and how many times to view them. In this blended design, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors’ feedback caters for the different learning styles of the learners. In preparation for the practical, students were informed about the availability of the vodcasts prior to the practical session, and after the practical work each student submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consisted of a questionnaire completed by the students, follow-up semi-structured interviews, and the practical reports submitted for assessment. The results from the questionnaire were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that most of the students who had not fully grasped the theory after the practical managed to gain the required knowledge by viewing the vodcasts. According to their feedback, students felt that they had control over how to use the material and could view it as many times as they wished; some students who had already understood the theory chose to view it once or not at all. Their understanding was demonstrated by the explanations in their reports and illustrated by the approach they took to explicate the results of their experimental work. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.

Relevance: 10.00%

Abstract:

This paper describes a case study of an electronic data management system developed in-house by the Facilities Management Directorate (FMD) of an educational institution in the UK. The FMD Maintenance and Business Services department is responsible for the maintenance of the built estate owned by the university, and needs a clear definition of the type of work undertaken and of the administration that enables any maintenance work to be carried out. This includes the management of resources, budget, cash flow, and the workflow of reactive, preventative and planned maintenance across the campus. In order to support the business process more efficiently, the FMD decided to move from a paper-based information system to an electronic system, WREN. Among WREN’s main advantages are that it is tailor-made to fit the purpose of its users, it is cost-effective when modifications to the system are needed, and its database can also be used as a knowledge management tool. There is a trade-off: because WREN is tailored to the specific requirements of the FMD, it may not be easy to implement in a different institution without extensive modifications. Nonetheless, WREN not only allows the FMD to carry out the tasks of maintaining and looking after the built estate of the university, but has also achieved its aim of minimising costs and maximising efficiency.

Relevance: 10.00%

Abstract:

The effect of poultry species (broiler or turkey) and genotype (Wrolstad or BUT T8 turkeys; Ross 308 or Cobb 500 broilers) on the efficiency with which dietary long-chain n-3 PUFA were incorporated into poultry meat was determined. Broilers and turkeys of both genotypes were fed one of six diets varying in fatty acid (FA) composition (two replicates per genotype × diet combination). Diets contained 50 g/kg added oil, which was either blended vegetable oil (control), or partially replaced with linseed oil (20 or 40 g/kg diet), fish oil (20 or 40 g/kg diet), or a mixture of the two (20 g linseed oil and 20 g fish oil/kg diet). Feeds and samples of skinless breast and thigh meat were analyzed for FA. Wrolstad dark meat was slightly more responsive than BUT T8 (P = 0.046) to increased dietary 18:3 concentrations (slopes of 0.570 and 0.465, respectively). The Ross 308 was also slightly more responsive than the Cobb 500 (P = 0.002) in this parameter (slopes of 0.557 and 0.449). There were no other significant differences between the genotypes. There was some evidence (based on the estimates of the slopes and their associated standard errors) that white turkey meat was more responsive than white chicken meat to 20:5 (slopes of 0.504 and 0.289 for turkeys and broilers, respectively). There was no relationship between dietary 18:3 n-3 content and meat 20:5 and 22:6 contents. If birds do convert 18:3 to higher FA, these acids are not then deposited in the edible tissues.
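
The reported "slopes" and their comparison between genotypes correspond to regressing meat n-3 content on dietary content with a genotype interaction term; a sketch of that style of analysis on synthetic data (not the study's measurements), where the interaction coefficient's p-value plays the role of the abstract's P = 0.046 comparison:

```python
# Synthetic slope-comparison sketch using an OLS interaction model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 48
diet_183 = rng.uniform(5, 40, n)                       # dietary 18:3 level
genotype = rng.choice(["Wrolstad", "BUT_T8"], n)
slope = np.where(genotype == "Wrolstad", 0.570, 0.465) # slopes from abstract
meat_183 = slope * diet_183 + rng.normal(0, 1.5, n)

df = pd.DataFrame({"diet": diet_183, "genotype": genotype, "meat": meat_183})
fit = smf.ols("meat ~ diet * genotype", data=df).fit()
# The diet:genotype coefficient estimates the slope difference between
# genotypes; its p-value tests whether the responsiveness differs.
print(fit.params["diet:genotype[T.Wrolstad]"],
      fit.pvalues["diet:genotype[T.Wrolstad]"])
```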

Relevance: 10.00%

Abstract:

Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
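
The paper's integration scripts are not listed in the abstract; the following sketch only illustrates the glue-script role described, chaining a MASCOT-to-pepXML conversion into a TPP validation step over a directory of result files. The command names and arguments are placeholders, not the authors' actual invocations.

```python
# Schematic glue script for an automated 14N/15N analysis pipeline.
import subprocess
from pathlib import Path

def analyse(raw_file: Path) -> Path:
    base = raw_file.stem                      # e.g. "run1" from "run1.dat"
    pepxml = Path(f"{base}.pep.xml")
    # 1. Convert the MASCOT search result to pepXML (placeholder command).
    subprocess.run(["Mascot2XML", str(raw_file)], check=True)
    # 2. Validate identifications and compute abundance ratios with TPP
    #    components (placeholder invocation, not the real flags).
    subprocess.run(["xinteract", str(pepxml)], check=True)
    return Path(f"{base}.prot.xml")

# Process every search-result file in a directory, fully unattended.
for raw in Path("runs").glob("*.dat"):
    print("protein-level results:", analyse(raw))
```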


Relevance: 10.00%

Abstract:

Web service composition can be facilitated by an automatic process consisting of rules, conditions and actions. This research adapts the Elementary Petri Net (EPN) to analyze and model web services and their composition. The paper describes a set of techniques for representing transition rules, algorithms and workflows so that web service composition can be carried out automatically.
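
The paper's EPN-based rules are only summarized in the abstract, so the sketch below shows just the elementary-net mechanics such rules build on: a transition fires when all its pre-places are marked and none of its post-places are (the contact-free condition), moving the composition from one state to the next. The service names are invented.

```python
# Elementary Petri Net firing mechanics for a two-step composition.
transitions = {
    # transition: (pre-places, post-places)
    "invoke_search": ({"request"}, {"searched"}),
    "invoke_book":   ({"searched"}, {"booked"}),
}

def enabled(t, marking):
    pre, post = transitions[t]
    # Enabled iff every pre-place is marked and no post-place is marked
    # (elementary nets forbid contact).
    return pre <= marking and not (post & marking)

def fire(t, marking):
    pre, post = transitions[t]
    return (marking - pre) | post

m = {"request"}
for t in ["invoke_search", "invoke_book"]:
    if enabled(t, m):
        m = fire(t, m)
print(m)   # {'booked'}
```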