976 results for Work flow
Abstract:
G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific work-flow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
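Because the G-Rex server is RESTful, the interaction pattern of step (4) can be mimicked with any HTTP client. The Python sketch below illustrates that general pattern of a client uploading inputs, starting a run, and streaming outputs back as they appear; the service URL, endpoint paths and JSON fields are illustrative assumptions for the sketch, not the documented G-Rex API.

```python
# Minimal sketch of driving a REST-style job service such as G-Rex.
# The host name, endpoint paths and JSON fields are illustrative
# assumptions, NOT the documented G-Rex interface.
import time
import requests

BASE = "http://cluster.example.org:8080/G-Rex/nemo"  # hypothetical service URL

def run_job(input_files):
    # Create a new job instance (assumed endpoint).
    job = requests.post(f"{BASE}/instances", timeout=30).json()
    job_url = f"{BASE}/instances/{job['id']}"

    # Upload input files prepared on the local machine.
    for path in input_files:
        with open(path, "rb") as f:
            requests.put(f"{job_url}/inputs/{path}", data=f, timeout=300)

    # Start the run, then poll for output as it is produced, so data
    # never accumulates on the remote cluster.
    requests.post(f"{job_url}/control/start", timeout=30)
    while True:
        status = requests.get(f"{job_url}/status", timeout=30).json()
        for name in status.get("new_outputs", []):
            data = requests.get(f"{job_url}/outputs/{name}", timeout=300).content
            with open(name, "wb") as out:   # stream back to the user's machine
                out.write(data)
        if status["state"] == "finished":
            break
        time.sleep(10)
```

The same sequence could be performed with curl alone, which is what keeps the client side so lightweight.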
Abstract:
Smooth flow of production in construction is hampered by the disparity between individual trade teams' goals and the goal of stable production flow for the project as a whole. This is exacerbated by the difficulty of visualizing the flow of work in a construction project. Building information modeling (BIM) addresses some of these issues by providing a powerful platform for visualizing work flow in control systems that also enable pull flow and deeper collaboration between teams on and off site. The requirements for implementation of a BIM-enabled pull flow construction management software system based on the Last Planner System™, called 'KanBIM', have been specified, and a set of functional mock-ups of the proposed system has been implemented and evaluated in a series of three focus group workshops. The requirements cover the areas of maintenance of work flow stability, enabling negotiation and commitment between teams, lean production planning with sophisticated pull flow control, and effective communication and visualization of flow. The evaluation results show that the system holds the potential to improve work flow and reduce waste by providing both process and product visualization at the work face.
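To make the notion of pull flow control concrete: a downstream team may pull a task only when all of its prerequisites are satisfied. The Python sketch below illustrates such a readiness check; the task model is a simplifying assumption for illustration, not the KanBIM system's actual data structure.

```python
# Minimal sketch of a pull-flow release check in the spirit of the KanBIM
# requirements. The task fields are a simplifying assumption.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    predecessors: list = field(default_factory=list)  # upstream Task objects
    materials_on_site: bool = False
    space_available: bool = False
    crew_committed: bool = False
    done: bool = False

    def ready_to_pull(self) -> bool:
        # A task may be pulled only when every upstream task is complete
        # and its own preconditions are satisfied ("making ready").
        return (all(p.done for p in self.predecessors)
                and self.materials_on_site
                and self.space_available
                and self.crew_committed)

walls = Task("partition walls", done=True)
drywall = Task("drywall", predecessors=[walls],
               materials_on_site=True, space_available=True, crew_committed=True)
print(drywall.ready_to_pull())  # True: the team may pull this work
```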
Abstract:
This paper addresses the requirements for a Workflow Management System intended to automate the production and distribution chain for cross-media content, which is by nature multi-partner and multi-site. It advocates requirements for ontology-based object lifecycle tracking within workflow integration by identifying the various types of interfaces, object life cycles, and workflow interaction environments within the AXMEDIS Framework.
Abstract:
Genetic algorithms (GAs) have been introduced into site layout planning as reported in a number of studies. In these studies, the objective functions were defined so as to employ the GAs in searching for the optimal site layout. However, few studies have been carried out to investigate the actual closeness of relationships between site facilities; it is these relationships that ultimately govern the site layout. This study has determined that the underlying factors of site layout planning for medium-size projects include work flow, personnel flow, safety and environment, and personal preferences. By finding the weightings on these factors and the corresponding closeness indices between each facility, a closeness relationship has been deduced. Two contemporary mathematical approaches - fuzzy logic theory and an entropy measure - were adopted in finding these results in order to minimize the uncertainty and vagueness of the collected data and improve the quality of the information. GAs were then applied to searching for the optimal site layout in a medium-size government project using the GeneHunter software. The objective function involved minimizing the total travel distance. An optimal layout was obtained within a short time. This reveals that the application of GA to site layout planning is highly promising and efficient.
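As an illustration of how such an objective function can drive a GA, the Python sketch below encodes a layout as a permutation assigning facilities to candidate locations and minimizes the closeness-weighted total travel distance. The closeness matrix, coordinates and GA settings are made-up illustrations, not the study's data or the GeneHunter implementation.

```python
# Minimal GA sketch for site layout: minimize closeness-weighted travel distance.
import random

N = 5                                       # number of facilities == candidate locations
closeness = [[0, 3, 1, 2, 1],               # illustrative symmetric closeness indices
             [3, 0, 2, 1, 1],
             [1, 2, 0, 3, 2],
             [2, 1, 3, 0, 1],
             [1, 1, 2, 1, 0]]
coords = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]  # candidate location coordinates

def dist(a, b):
    return ((coords[a][0] - coords[b][0]) ** 2 + (coords[a][1] - coords[b][1]) ** 2) ** 0.5

def cost(layout):
    # layout[i] is the location assigned to facility i; the objective is the
    # closeness-weighted total travel distance, to be minimized.
    return sum(closeness[i][j] * dist(layout[i], layout[j])
               for i in range(N) for j in range(i + 1, N))

def mutate(layout):
    a, b = random.sample(range(N), 2)       # swap the locations of two facilities
    layout = layout[:]
    layout[a], layout[b] = layout[b], layout[a]
    return layout

pop = [random.sample(range(N), N) for _ in range(30)]   # random initial layouts
for _ in range(200):                                    # evolve: keep elite, mutate
    pop.sort(key=cost)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = min(pop, key=cost)
print(best, cost(best))                                 # best layout found and its cost
```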
Abstract:
This work deals with the development of a numerical technique for simulating three-dimensional viscoelastic free surface flows using the PTT (Phan-Thien-Tanner) nonlinear constitutive equation. In particular, we are interested in flows possessing moving free surfaces. The equations describing the numerical technique are solved by the finite difference method on a staggered grid. The fluid is modelled by a Marker-and-Cell type method and an accurate representation of the fluid surface is employed. The full free surface stress conditions are considered. The PTT equation is solved by a high order method, which requires the calculation of the extra-stress tensor on the mesh contours. To validate the numerical technique developed in this work, flow predictions for fully developed pipe flow are compared with an analytic solution from the literature. Then, results of complex free surface flows using the PTT equation, such as the transient extrudate swell problem and a jet flowing onto a rigid plate, are presented. An investigation of the effects of the parameters epsilon and xi on the extrudate swell and jet buckling problems is reported. (C) 2010 Elsevier B.V. All rights reserved.
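For reference, the linear PTT model as commonly written in the literature (not quoted from the paper) shows where the parameters epsilon and xi enter: epsilon in the stress function multiplying the trace of the extra-stress tensor, and xi in the Gordon-Schowalter convected derivative.

```latex
f(\operatorname{tr}\boldsymbol{\tau})\,\boldsymbol{\tau}
  + \lambda\!\left[\frac{D\boldsymbol{\tau}}{Dt}
  - \mathbf{L}\boldsymbol{\tau} - \boldsymbol{\tau}\mathbf{L}^{T}
  + \xi\,(\mathbf{D}\boldsymbol{\tau} + \boldsymbol{\tau}\mathbf{D})\right]
  = 2\eta_{p}\,\mathbf{D},
\qquad
f(\operatorname{tr}\boldsymbol{\tau}) = 1 + \frac{\varepsilon\lambda}{\eta_{p}}\operatorname{tr}\boldsymbol{\tau}
```

Here \(\lambda\) is the relaxation time, \(\eta_p\) the polymeric viscosity, \(\mathbf{L} = \nabla\mathbf{u}\) the velocity gradient and \(\mathbf{D}\) its symmetric part; setting \(\varepsilon = \xi = 0\) recovers the upper-convected Maxwell model.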
Abstract:
This thesis work has introduced process orientation at the printing company Color Print Sweden AB. The outcome of this work is a survey of the work flow at the prepress department. To visualise the production process at the company, a comprehensive mapping of the main process, order-to-delivery, has been made. The work has detected a couple of critical elements in the existing process. These elements are the following: the initial check of material delivered to the prepress department as well as the control made of the plotter print-out, digital test print and plate. To guarantee the quality of the prints it is very important that the work made in the prepress department is optimized. This survey can therefore be used as a basis for continuous improvement in the processes at Color Print Sweden AB. This work has also resulted in a template that Color Print Sweden AB can use to design routine descriptions to ensure that specific work tasks are performed the right way by everyone and all the time.
Abstract:
Nowadays, the way a company deals with its information assets is a major factor not only in its success but also in its survival in the global market. The number of information security incidents has grown in recent years, and establishing information security policies that keep the security requirements of assets at the desired levels is a high priority for companies. This dissertation proposes a unified process for the elaboration, maintenance and development of information security policies, the Processo Unificado para Políticas de Segurança da Informação (PUPSI). The elaboration of this proposal started with the construction of a body of knowledge based on official documents and standards about security policies and information security published over the last two decades. Based on the examined documents, the model defines the security policies that need to be established in the organization, their work flow, and the hierarchical sequence among them; the entities participating in the process are also modelled. Because the problem treated by the model is complex, involving all the security policies a company must have, PUPSI has an iterative and incremental approach. This approach was obtained by instantiating the RUP (Rational Unified Process) model. RUP is an object-oriented software development platform from Rational Software (an IBM group company) that incorporates best practices known in the market. From RUP, PUPSI inherits a process structure that offers functionality, capacity for dissemination and comprehension, and performance and agility in process adjustment, as well as the capacity to adapt to technological and structural changes in the market and in the company.
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
This work presents practical results of the systematic attention given to the processing and seismic interpretation of some land lines from the Tacutu graben (Brazil) data set, to which fundamental steps of the data-driven WIT system for CRS (Common Reflection Surface) stack imaging were applied. As a result, we expect to establish a work flow for the seismic re-evaluation of sedimentary basins. Based on the wavefront attributes resulting from the CRS stack, a smooth macro-velocity model was obtained by tomographic inversion. Using this macro-model, pre- and post-stack depth migration was performed. In addition, other CRS-based techniques were applied in parallel, such as residual static correction and limited-aperture migration based on the projected Fresnel zone. A geological interpretation of the stacked and migrated sections was outlined. From the visual details of the panels it is possible to interpret unconformities, thinnings, and a main faulted anticline with sets of horsts and grabens. Also, part of the selected line requires more detailed processing to better reveal any structures present in the subsurface.
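For context, the CRS stack referred to here approximates reflection traveltimes around a central normal ray by a second-order (hyperbolic) operator parameterized by three wavefront attributes: the emergence angle beta and the curvatures K_N and K_NIP of the normal and normal-incidence-point waves. A standard 2D form from the CRS literature (not quoted from this work) is:

```latex
t^{2}(x_{m},h) = \left(t_{0} + \frac{2\sin\beta}{v_{0}}\,x_{m}\right)^{2}
  + \frac{2\,t_{0}\cos^{2}\beta}{v_{0}}\left(K_{N}\,x_{m}^{2} + K_{NIP}\,h^{2}\right)
```

Here \(x_m\) is the midpoint displacement from the central point, \(h\) the half-offset and \(v_0\) the near-surface velocity; the \(K_{NIP}\) and \(\beta\) attributes estimated by the stack are what feed the tomographic inversion for the macro-velocity model.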
Abstract:
The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in clinical routine MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol for qMRS based on standard available MR pulse sequences has been implemented, along with functionalities newly added to the free software package jMRUI-v5.0, to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the work-flow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created using spectroscopic data acquired from 55 normal subjects (age range 6-61 years) on 1.5T and 3T MR systems, and the work-flow is illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and newly implemented software functionalities facilitate the incorporation of qMRS and reference to normal metabolite concentration data in daily clinical routine. Magn Reson Med, 2013. © 2012 Wiley Periodicals, Inc.
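The water-reference quantification underlying step (d) can be summarized, in its simplest form (ignoring the relaxation and tissue-compartment corrections that the advanced water reference models mentioned above account for), as:

```latex
c_{\mathrm{met}} = \frac{S_{\mathrm{met}}}{S_{\mathrm{H_2O}}}
  \cdot \frac{n_{\mathrm{H_2O}}}{n_{\mathrm{met}}}
  \cdot c_{\mathrm{H_2O}}
```

where \(S\) denotes fitted signal amplitudes, \(n\) the number of protons contributing to each signal, and \(c_{\mathrm{H_2O}}\) the assumed tissue water concentration. This simplified form is given for orientation only and is not quoted from the paper.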
Abstract:
OBJECTIVES Sonographic guidance for peripheral nerve anesthesia has proven increasingly successful in clinical practice; however, fears persist in certain units that a change to sonographically guided regional anesthesia may impair block quality and operating room work flow. In this retrospective cohort study, block quality and patient satisfaction during the transition period from nerve stimulator to sonographic guidance for axillary brachial plexus anesthesia in a tertiary referral center were investigated. METHODS Anesthesia records of all patients who had elective surgery of the wrist or hand during the transition time (September 1, 2006-August 25, 2007) were reviewed for block success, placement time, anesthesiologist training level, local anesthetic volume, and requirement of additional analgesics. Postoperative records were reviewed, and patient satisfaction was assessed by telephone interviews in matched subgroups. RESULTS Of 415 blocks, 341 were sonographically guided, and 74 were nerve stimulator guided. Sonographically guided blocks were mostly performed by novices, whereas nerve stimulator-guided blocks were performed by advanced users (72.3% versus 14%; P < .001). Block performance times and success rates were similar in both groups. In sonographically guided blocks, significantly less local anesthetic was applied than in nerve stimulator-guided blocks (mean ± SD, 36.1 ± 7.1 versus 43.9 ± 6.1 mL; P < .001), and less opioid was required (fentanyl, 66.1 ± 30 versus 90 ± 62 μg; P < .001). Interviewed patients reported significantly less procedure-related discomfort, pain, and prolonged procedure time when block placement was sonographically guided (2% versus 20%; P = .002). CONCLUSIONS Transition from nerve stimulator to sonographic guidance for axillary brachial plexus blocks did not change block performance times or success rates. Patient satisfaction was improved even during the early institutional transition period.
Abstract:
BACKGROUND Patient-to-image registration is a core process of image-guided surgery (IGS) systems. We present a novel registration approach for application in laparoscopic liver surgery, which reconstructs in real time an intraoperative volume of the underlying intrahepatic vessels through an ultrasound (US) sweep process. METHODS An existing IGS system for an open liver procedure was adapted, with suitable instrument tracking for laparoscopic equipment. Registration accuracy was evaluated on a realistic phantom by computing the target registration error (TRE) for 5 intrahepatic tumors. The registration work flow was evaluated by computing the time required for performing the registration. Additionally, a scheme for intraoperative accuracy assessment by visual overlay of the US image with preoperative image data was evaluated. RESULTS The proposed registration method achieved an average TRE of 7.2 mm in the left lobe and 9.7 mm in the right lobe. The average time required for performing the registration was 12 minutes. A positive correlation was found between the intraoperative accuracy assessment and the obtained TREs. CONCLUSIONS The registration accuracy of the proposed method is adequate for laparoscopic intrahepatic tumor targeting. The presented approach is feasible and fast and may, therefore, not be disruptive to the current surgical work flow.
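For readers unfamiliar with the metric, the target registration error is simply the residual distance at a target after applying the estimated patient-to-image transform. Below is a minimal Python sketch with illustrative numbers (not the study's data); the rigid transform is assumed to have been estimated by the registration step.

```python
# Minimal sketch of computing target registration error (TRE): the distance
# between a target's image-space position and its intraoperatively located
# position mapped through the estimated rigid transform. Values illustrative.
import numpy as np

R = np.eye(3)                       # estimated rotation (identity for this sketch)
t = np.array([1.0, -2.0, 0.5])      # estimated translation in mm

targets_image = np.array([[10.0, 20.0, 30.0],    # tumor positions in image space (mm)
                          [40.0, 15.0, 25.0]])
targets_patient = np.array([[11.2, 18.3, 30.1],  # same tumors located in patient space
                            [40.8, 13.5, 25.9]])

mapped = targets_patient @ R.T + t               # map patient space -> image space
tre = np.linalg.norm(mapped - targets_image, axis=1)
print(tre.mean())                                # average TRE in mm, as reported per lobe
```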