929 results for Scripts
Abstract:
This paper describes the project to develop an open-source 3D GIS for mobile devices (Apple iOS and Android) and for web browsers using WebGL technology. In the current phase, we focus on the design and implementation of the virtual globe, the essential element underpinning the 3D GIS, and of an IDE that allows new functionality to be programmed into the globe. The design objectives for the virtual globe are (i) simplicity, with structured code that eases portability and a straightforward open-source API; (ii) efficiency, taking into account the hardware resources of the mobile devices most widespread in the market; (iii) usability, implementing intuitive gesture-based navigation for on-screen interaction; and (iv) scalability, since the API allows capabilities to be extended through scripts that can run both inside the web browser and natively on the mobile platforms. Against a backdrop of clear proliferation of mobile applications, Glob3 Mobile aims to be a strong contender that becomes an open-source 3D GIS covering a variety of sector-specific applications, some already under way.
Abstract:
This study discussed the teaching of Portuguese in the Integrated Technical Secondary Courses in Agriculture and Agro-industry at the Instituto Federal de Sergipe – IFS/Campus São Cristóvão. Over the course of the investigation, carried out during 2010, interview scripts were administered to students and teachers of the subject at the institute, with the aim of measuring and qualifying students' level of achievement in acquiring textual competence (written production). It should be noted that this procedure was preceded by an exploratory study, which guided the direction of the investigation. The investigation, in turn, had as its main objective to propose, through comparison with the results of the students' written production practice and with archived documents, a framework of strategies for developing that competence. The analysis and discussion of the collected data showed that, in everyday mother-tongue learning activities, many of our students showed sustained interest in the practices developed, on the one hand; on the other, some showed resistance to oral and written production practices. Furthermore, it was also apparent that the students understand and "evaluate" the difficulties the Institute is going through and suggest that teachers adopt new methodologies. As a result of the study, it was found that these students already have some command of written production; nevertheless, the investigation identified a need to refine classroom techniques and practices in order to improve student performance.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
Abstract:
G-Rex is light-weight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they are running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command line program that can easily be incorporated into existing scientific work-flow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
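Because the G-Rex server is a RESTful Web application, a run can in principle be monitored with nothing more than a stock HTTP client. The sketch below illustrates that idea in Python; the server URL, instance path and status text are invented for illustration and are not the documented G-Rex API.

    # Minimal sketch of polling a G-Rex service over plain HTTP, as the REST
    # design described above permits. The host name, service path and status
    # wording are hypothetical, not the real G-Rex endpoints.
    import time
    import urllib.request

    BASE = "http://cluster.example.ac.uk:8080/G-Rex"   # hypothetical server URL
    INSTANCE = BASE + "/nemo/instances/42"             # hypothetical run instance

    def fetch(url: str) -> str:
        """GET a URL and return the response body as text."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    # Poll the (hypothetical) status resource until the run finishes.
    while "finished" not in fetch(INSTANCE + "/status"):
        time.sleep(60)  # check once a minute
    print(fetch(INSTANCE + "/status"))

The same resource could equally be inspected with a Web browser or curl, which is the point of the REST design: no special client software is strictly required.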
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale grid resources such as the UK National Grid Service.
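Step (3) above is the only change a modeller makes to an existing workflow: the direct model launch is swapped for a call to the G-Rex client. A minimal sketch of that substitution follows, in Python for concreteness; "GRexRun" is the client name given in the abstract, but the argument layout shown here is an assumption, not the documented interface.

    # Sketch of the "replace mpirun with GRexRun" step described above.
    # GRexRun is the real client name given in the text; the argument layout
    # (service URL followed by the original command line) is an assumption.
    import subprocess

    def run_model(remote: bool = True) -> None:
        model_cmd = ["mpirun", "-np", "40", "./nemo.exe"]  # original local launch
        if remote:
            # Hand the same command line to the G-Rex client, which uploads
            # inputs, launches the job remotely and streams outputs back.
            cmd = ["GRexRun", "http://cluster.example.ac.uk:8080/G-Rex/nemo"] + model_cmd
        else:
            cmd = model_cmd
        subprocess.run(cmd, check=True)

    run_model()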
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
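The "few scripts" that integrate the workflow are essentially batch drivers around the MASCOT and TPP command-line tools. A sketch of such a driver is given below; Mascot2XML and xinteract are genuine TPP utilities, but the invocation details and file names are illustrative assumptions rather than the authors' actual scripts.

    # Sketch of a pipeline driver for uniformly 14N/15N-labeled data.
    # Mascot2XML and xinteract are real TPP command-line tools; the flags
    # and file names below are placeholders, not the authors' scripts.
    import subprocess
    from pathlib import Path

    def process(dat_file: Path) -> None:
        # 1. Convert the MASCOT search result (.dat) to pepXML.
        subprocess.run(["Mascot2XML", str(dat_file)], check=True)  # options omitted
        pepxml = dat_file.with_suffix(".pep.xml")
        # 2. Validate identifications and derive abundance ratios with the
        #    TPP (PeptideProphet etc. via xinteract); the exact options
        #    depend on the local TPP installation and labeling scheme.
        subprocess.run(["xinteract", str(pepxml)], check=True)

    # Drive the whole batch of search results through the pipeline.
    for dat in sorted(Path("searches").glob("*.dat")):
        process(dat)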
Abstract:
The constructivist model of 'soft' value management (VM) is contrasted with the VM discourse appropriated by cost consultants who operate from within UK quantity surveying (QS) practices. The enactment of VM by cost consultants is shaped by the institutional context within which they operate and is not necessarily representative of VM practice per se. Opportunities to perform VM during the formative stages of design are further constrained by the positivistic rhetoric that such practitioners use to conceptualize and promote their services. The complex interplay between VM theory and practice is highlighted and analysed from a non-deterministic perspective. Codified models of 'best practice' are seen to be socially constructed and legitimized through human interaction in the context of interorganizational networks. Published methodologies are seen to inform practice in only a loose and indirect manner, with extensive scope for localized improvisation. New insights into the relationship between VM theory and practice are derived from the dramaturgical metaphor. The social reality of VM is seen to be constituted through scripts and performances, both of which are continuously contested across organizational arenas. It is concluded that VM defies universal definition and is conceptualized and enacted differently across different localized contexts.
Abstract:
This article describes two studies. The first study was designed to investigate the ways in which the statutory assessments of reading for 11-year-old children in England assess inferential abilities. The second study was designed to investigate the levels of performance achieved in these tests in 2001 and 2002 by 11-year-old children attending state-funded local authority schools in one London borough. In the first study, content and questions used in the reading papers for the Standard Assessment Tasks (SATs) in the years 2001 and 2002 were analysed to see what types of inference were being assessed. This analysis suggested that the complexity involved in inference making and the variety of inference types that are made during the reading process are not adequately sampled in the SATs. Similar inadequacies are evident in the ways in which the programmes of study for literacy recommended by central government deal with inference. In the second study, scripts of completed SATs reading papers for 2001 and 2002 were analysed to investigate the levels of inferential ability evident in scripts of children achieving different SATs levels. The analysis in this article suggests that children who only just achieve the 'target' Level 4 do so with minimal use of inference skills. They are particularly weak in making inferences that require the application of background knowledge. Thus, many children who achieve the reading level (Level 4) expected of 11-year-olds are entering secondary education with insecure inference-making skills that have not been recognised.
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and therefore the crystal system is known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely.
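Since ExtSym takes only integrated intensities and their error estimates, an interfacing script reduces to exporting those columns (plus reflection indices) from the refinement package. The sketch below shows the idea; the column layout and file name are assumptions for illustration, not ExtSym's documented input format, which is specified in the paper.

    # Illustrative export of Pawley-refined integrated intensities and their
    # estimated standard deviations for an ExtSym interfacing script. The
    # column order and file layout are assumed, NOT ExtSym's documented format.
    reflections = [
        # (h, k, l, integrated intensity, estimated standard deviation)
        (0, 0, 2, 1523.4, 12.1),
        (0, 1, 1, 87.2, 4.3),
        (1, 1, 0, 0.9, 0.8),   # near-zero intensity: candidate extinction
    ]

    with open("extsym_input.txt", "w") as out:
        for h, k, l, intensity, sigma in reflections:
            out.write(f"{h:3d}{k:3d}{l:3d}{intensity:12.3f}{sigma:10.3f}\n")

As the abstract stresses, the sigma column is not cosmetic: the ranking ExtSym returns depends strongly on the accuracy of these error estimates.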
Abstract:
Typically, the relationship between insect development and temperature is described by two characteristics: the minimum temperature needed for development to occur (T-min) and the number of day-degrees required (DDR) for the completion of development. We investigated these characteristics in three English populations of Thrips major and T. tabaci [Cawood, Yorkshire (N53°49', W1°7'); Boxworth, Cambridgeshire (N52°15', W0°1'); Silwood Park, Berkshire (N51°24', W0°38')], and two populations of Frankliniella occidentalis (Cawood; Silwood Park). While there were no significant differences among populations in either T-min (mean for T. major = 7.0°C; T. tabaci = 5.9°C; F. occidentalis = 6.7°C) or DDR (mean for T. major = 229.9; T. tabaci = 260.8; F. occidentalis = 233.4), there were significant differences in the relationship between temperature and body size, suggesting the presence of geographic variation in this trait. Using published data, in addition to those newly collected, we found a negative relationship between T-min and DDR for F. occidentalis and T. tabaci, supporting the hypothesis that a trade-off between T-min and DDR may constrain adaptation to local climatic conditions.
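The two characteristics combine in the standard linear degree-day model: development completes once accumulated day-degrees above T-min reach the DDR, so at a constant temperature T the development time is DDR / (T - T-min) days. A short illustration using the mean values reported above (the model form is textbook; the constant-temperature scenario is hypothetical):

    # Linear degree-day model: at a constant temperature T above the
    # threshold T_min, development takes DDR / (T - T_min) days.
    # Parameter values are the population means reported above.
    SPECIES = {
        # name: (T_min in °C, day-degree requirement)
        "Thrips major": (7.0, 229.9),
        "Thrips tabaci": (5.9, 260.8),
        "Frankliniella occidentalis": (6.7, 233.4),
    }

    def days_to_develop(t_min: float, ddr: float, temp: float) -> float:
        if temp <= t_min:
            return float("inf")  # no development below the threshold
        return ddr / (temp - t_min)

    for name, (t_min, ddr) in SPECIES.items():
        print(f"{name}: {days_to_develop(t_min, ddr, temp=20.0):.1f} days at 20°C")

The trade-off reported in the abstract is visible in these numbers: T. tabaci has the lowest threshold but the highest day-degree requirement.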
Abstract:
A review of current risk pricing practices in the financial, insurance and construction sectors is conducted through a comprehensive literature review. The purpose was to inform a study on risk and price in the tendering processes of contractors: specifically, how contractors take account of risk when calculating their bids for construction work. Mainstream literature was consulted because construction management research is treated here as a field of application rather than a fundamental academic discipline. Analytical models are used for risk pricing in the financial sector, and certain mathematical laws and principles of insurance are used to price risk in the insurance sector. Construction contractors and practitioners, by contrast, are described as traditionally pricing allowances for project risk using mechanisms such as intuition and experience. Project risk analysis models have proliferated in recent years, yet they are rarely used because of the problems practitioners face when confronted with them. A discussion of practices across the three sectors shows that the construction industry does not approach risk with the sophisticated mechanisms of the other two sectors. This is not in itself a poor situation; however, knowledge transfer from finance and insurance can help construction practitioners. Equally, formal risk models for contractors should be informed by the commercial exigencies and unique characteristics of the construction sector.
Abstract:
A case study of the tendering process and cost/time performance of a public building project in Ghana is conducted. Competitive bids submitted by five contractors for the project, in which contractors were required to prepare their own quantities, were analysed to compare differences in their pricing levels and risk/requirement perceptions. Queries sent to the consultants at the tender stage were also analysed to identify the significant areas of concern to contractors in relation to the tender documentation. The five bid prices differed significantly, as did the queries submitted for clarification, although a few were similar. In a before-and-after comparison, the expected cost/time estimates at the start of the project were compared with the actual cost/time values, i.e. what happened in the actual construction phase. The analysis showed that the project exceeded its expected cost by 18% and its planned time by 210%, with variations and inadequate design being the major reasons. Following an exploration of these issues, an alternative tendering mechanism is recommended to clients. A shift away from the conventional approach of awarding work based on price, and serious consideration of alternative procurement routes, can help clients in Ghana obtain better value for money on their projects.
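The overrun figures follow from the usual before-and-after comparison, overrun = (actual - expected) / expected. A toy check with hypothetical contract figures chosen to reproduce the reported percentages (the study's raw values are not given in the abstract):

    # Percentage overrun: (actual - expected) / expected * 100.
    # The monetary and schedule figures below are hypothetical; the abstract
    # reports only the resulting percentages (18% cost, 210% time).
    def overrun_pct(expected: float, actual: float) -> float:
        return (actual - expected) / expected * 100.0

    expected_cost, actual_cost = 1_000_000.0, 1_180_000.0   # hypothetical sums
    expected_months, actual_months = 10.0, 31.0             # hypothetical schedule
    print(f"cost overrun: {overrun_pct(expected_cost, actual_cost):.0f}%")     # 18%
    print(f"time overrun: {overrun_pct(expected_months, actual_months):.0f}%") # 210%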
Abstract:
Increasingly, the UK's Private Finance Initiative (PFI) has created a demand for construction companies to transfer knowledge from one organisation or project to another. Knowledge transfer processes in such contexts face many challenges, owing to the many resulting discontinuities in the involvement of organisations, personnel and information flow. This paper empirically identifies the barriers and enablers that hinder or enhance the transfer of knowledge in PFI contexts, drawing upon a questionnaire survey of construction firms. The main findings show that knowledge transfer processes in PFIs are hindered by time constraints, lack of trust, and the policies, procedures, rules and regulations attached to the projects. Nevertheless, knowledge transfer is enhanced by supportive leadership, participation and commitment from the relevant parties, and good communication between those parties. The findings have considerable relevance to understanding the mechanism of knowledge transfer between organisations, projects and individuals in PFI contexts, in overcoming the barriers and enhancing the enablers. Furthermore, practitioners and managers can use the findings to design knowledge transfer frameworks that overcome the barriers encountered while strengthening the enablers, thereby improving knowledge transfer processes.