995 results for "Program execution"
Abstract:
The major obstacle to the wider use of parallel machines is the lack of programming tools that allow software to be ported between machines with different performance characteristics. This dissertation analyzes whether languages with explicit parallelism fulfil this requirement and are therefore suitable for programming such machines. The approach of using more parallelism in the program than the machine provides, in order to hide communication latency (parallel slackness), is presented as a solution to the efficiency problems that appear when programs with explicit parallelism run on machines whose granularity is too coarse. With this technique, programs written in these languages can therefore be ported efficiently to different machines. To study languages with explicit parallelism, a new abstract model of parallelism is developed, in which a system is described by a hierarchy of parallel virtual machines. The model supports a generic analysis of the implementation of this kind of language, whether on top of an operating system or directly on the physical machine. This generic analysis is applied to the Ada language: the Ada-specific features that affect efficient implementation are identified and analyzed, together with the change proposals from the Ada 9X revision process. Within the framework of the parallelism model, the specific problems of implementing the language on top of an operating system are also studied. In such implementations, a program's interactions with the external environment can cause problems, such as blocking the corresponding operating-system process, that degrade program performance. These problems are analyzed and solutions are proposed, developed in depth through a practical example: access to the GKS (Graphical Kernel System) graphics standard from Ada programs.
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial-order models are automatically extracted from instrumented concurrent program executions, and potential atomicity-violation bugs are automatically verified against these partial-order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method of mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable because it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-off between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity-violation prediction: 1) a post-prediction analysis method that increases coverage while ensuring precision; 2) a follow-up replaying method that further increases coverage. Both methods are implemented in a completely automatic tool.
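As an illustration of the kind of property involved, the following is a minimal sketch (not McPatom itself, which works on partial-order models with model checking): scanning a trace of accesses to a single shared variable for the classic unserializable three-access interleavings between a pair of threads. The trace layout and function names are illustrative assumptions.

```python
# Sketch: detect unserializable (local, remote, local) access triples on
# ONE shared variable; "R" = read, "W" = write. The four patterns below
# are the standard single-variable unserializable interleavings.
UNSERIALIZABLE = {("R", "W", "R"), ("W", "W", "R"), ("R", "W", "W"), ("W", "R", "W")}

def atomicity_violations(trace):
    """trace: list of (thread_id, op) events on one shared variable.
    Returns (i, j, k) index triples forming a suspicious interleaving."""
    violations = []
    for i, (t1, op1) in enumerate(trace):
        for j in range(i + 1, len(trace)):
            t2, op2 = trace[j]
            if t2 == t1:
                break  # t1's own next access comes first: nothing interleaved
            # find t1's next access after the remote access at j
            for k in range(j + 1, len(trace)):
                t3, op3 = trace[k]
                if t3 == t1:
                    if (op1, op2, op3) in UNSERIALIZABLE:
                        violations.append((i, j, k))
                    break
    return violations

trace = [("A", "R"), ("B", "W"), ("A", "R")]  # read-check / remote write / read-use
print(atomicity_violations(trace))  # [(0, 1, 2)]
```

The read-write-read triple above is the textbook atomicity violation: thread A checks a value, thread B overwrites it, and A then uses the stale check.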
Abstract:
To guarantee quality in health care, it is necessary to know the main components of the concept of quality, to draw up a quality-assurance program, to evaluate the program's execution systematically, and to define the conceptual model to be applied. Preventing pressure ulcers is a concern of the health professionals who care for elderly people dependent on informal caregivers, and their prevention is a challenge for the nursing team, since the incidence of pressure ulcers is high in this population. Investing in the prevention and treatment of pressure ulcers (PU) has a positive effect on the quality of the care provided. With this problem in mind, a situation diagnosis was made based on observation of the nursing records and of the procedures related to the prevention and treatment of pressure ulcers, with the aim of contributing to the implementation of a program of continuous quality improvement in the care of elderly people at risk of pressure ulcers who depend on informal caregivers. After this phase, a set of strategies we considered pertinent to implement was defined. With the activities developed in this work, we hope to improve the information produced, obtaining data that make it possible to improve nursing care for elderly people and families at risk of PU, as well as to contribute to identifying problems and defining improvement strategies in the future.
Abstract:
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known, as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program-transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Chen limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful trade-off and can be competitive with state-of-the-art implementations.
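Tabling stores the answers of tabled predicates so that repeated subgoals are resolved from the table instead of being recomputed. As a rough functional analogy (Python memoization, not Prolog tabling, and not the CCall transformation itself), the complexity benefit resembles memoizing a naively exponential recursion:

```python
from functools import lru_cache

# Count the calls made by a naive exponential recursion versus the same
# recursion with a memo table: the table collapses the call tree so each
# distinct subgoal is computed exactly once.
calls = {"naive": 0, "tabled": 0}

def fib_naive(n):
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)          # the "table": one stored answer per call
def fib_tabled(n):
    calls["tabled"] += 1
    return n if n < 2 else fib_tabled(n - 1) + fib_tabled(n - 2)

assert fib_naive(20) == fib_tabled(20) == 6765
print(calls)  # {'naive': 21891, 'tabled': 21}
```

In actual tabled Prolog the table additionally breaks loops in (for instance) left-recursive predicates, which is what yields the termination guarantees mentioned above; plain memoization does not capture that aspect.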
Abstract:
Background: Traffic accidents are the main cause of death in the first decades of life, and traumatic brain injury (TBI) is the event most responsible for their severity. In 1995 the SBN started an educational program for the prevention of traffic accidents, adapting the American "Think First" model to the Brazilian environment, with special effort devoted to preventing TBI through the use of seat belts and motorcycle helmets. The objective of the present study was to set up a traffic-accident prevention program based on the adapted Think First and to evaluate its impact by comparing epidemiological variables before and after the start of the program. Methods: The program was executed in the city of Maringá from September 2004 to August 2005, with educational actions targeting the entire population, especially teenagers and young adults. It was implemented by building a network of information facilitators and multipliers inside organized civil society, with widespread dissemination to the population. To measure the impact of the program, specific software was developed for storing and processing the epidemiological variables. Results: The results showed a reduction in trauma severity due to traffic accidents after the execution of the program, mainly in TBI. Conclusions: The adapted Think First was systematically implemented and its impact measured for the first time in Brazil, revealing the program's usefulness in reducing trauma and TBI severity in traffic accidents through public education and representing a standardized implementation model for a developing country. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
This paper presents an algorithm to efficiently generate the state space of systems specified using the IOPT Petri net modeling formalism. IOPT nets are a non-autonomous Petri net class based on Place-Transition nets, with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics in which, in a single execution step, all enabled transitions fire simultaneously. This increases the complexity of the resulting state space and can cause an arc "explosion" effect: real-world applications with several million states can reach a number of arcs one order of magnitude higher, creating the need for high-performance state-space generation algorithms. The proposed algorithm applies a compilation approach, reading a PNML file containing an IOPT model and automatically generating an optimized C program that computes the corresponding state space.
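A single maximal execution step can be sketched as follows. This is a simplified illustration with a hypothetical data layout, using plain Place-Transition semantics; real IOPT nets also involve input/output signals, guards, and explicit priorities (list order stands in for priority here).

```python
# Sketch of maximal-step firing: every transition enabled at the start of
# the step fires in the same step; conflicts over tokens are resolved in
# priority (list) order, and tokens are produced only at the end of the
# step so that nothing becomes enabled mid-step.

def maximal_step(marking, transitions):
    """marking: dict place -> token count.
    transitions: (name, consume, produce) triples in priority order,
    where consume/produce map places to token counts.
    Returns (new_marking, names_of_fired_transitions)."""
    work = dict(marking)
    fired = []
    for name, consume, produce in transitions:
        if all(work.get(p, 0) >= n for p, n in consume.items()):
            for p, n in consume.items():   # claim tokens immediately so
                work[p] -= n               # conflicts resolve by priority
            fired.append((name, produce))
    for _, produce in fired:               # produce after the whole step
        for p, n in produce.items():
            work[p] = work.get(p, 0) + n
    return work, [name for name, _ in fired]

ts = [("t1", {"p1": 1}, {"p3": 1}), ("t2", {"p2": 1}, {"p3": 1})]
print(maximal_step({"p1": 1, "p2": 1}, ts))  # both fire in one step
```

Because every state can fire a whole set of transitions at once, each state may have many outgoing arcs (one per fireable subset boundary), which is the arc "explosion" effect the paper addresses.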
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Dissertation submitted to obtain the degree of Master in Computational Logic.
Abstract:
A challenge when executing applications on a cluster is to improve performance while using resources efficiently, and this challenge is greater in a distributed environment. With this in mind, a set of rules is proposed for carrying out the computation on each node, based on an analysis of the applications' computation and communication. A cell-mapping scheme and a method for scheduling the execution order are analyzed, taking priority-based execution into account, in which border cells have higher priority than interior cells. The experiments show the overlap of interior computation with the communication of the border cells, with results in which speedup increases and efficiency levels remain above 85%; finally, gains in execution time are obtained, leading to the conclusion that an overlapping scheme can indeed be designed that allows SPMD applications to run efficiently on a cluster.
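The priority/overlap scheme can be sketched schematically as follows. Plain Python threads stand in for a real message-passing runtime; `exchange_borders` and the one-dimensional grid split are illustrative assumptions, not the original code.

```python
import concurrent.futures
import time

# Sketch: compute border cells first (they have priority), start their
# exchange in the background, and compute the interior cells while the
# communication is in flight, hiding the communication latency.

def exchange_borders(border_values):
    time.sleep(0.05)              # stand-in for communication latency
    return border_values          # pretend these are the neighbours' halos

def step(border_cells, interior_cells):
    new_border = [v + 1 for v in border_cells]           # borders first
    with concurrent.futures.ThreadPoolExecutor() as pool:
        fut = pool.submit(exchange_borders, new_border)  # start exchange
        new_interior = [v + 1 for v in interior_cells]   # overlapped compute
        halo = fut.result()                              # wait for exchange
    return new_border, new_interior, halo

print(step([1, 2], [3, 4, 5]))  # ([2, 3], [4, 5, 6], [2, 3])
```

As long as the interior computation takes at least as long as the border exchange, the communication cost disappears from the critical path, which is what keeps efficiency high as the node count grows.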
Abstract:
Performance prediction and application behavior modeling have been the subject of extensive research aiming to estimate application performance with acceptable precision. A novel approach to predicting the performance of parallel applications is based on the concept of parallel application signatures, which consists of extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be made. One problem is that the performance of an application depends on the program workload. Every type of workload affects how an application performs in a given system differently, and so affects the signature's execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how a program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects its parallel application signature. Using regression analysis, we are able to generalize each phase's execution-time and weight functions to predict an application's performance on a target system for any type of workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications, and well-known real scientific applications.
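The signature idea can be sketched with hypothetical numbers (the workload sizes, times, and weights below are illustrative, not measured data): the predicted total time is the weighted sum of the phase times, and a per-phase regression (a plain linear fit here) generalizes each phase's time to unseen workload sizes within the modeled range.

```python
# Sketch: predicted total time = sum over phases of (phase time x weight),
# with each phase time generalized by a least-squares fit over workload size.

def linear_fit(xs, ys):
    """Least-squares line ys ~ a*xs + b, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict_total(phase_times, weights):
    return sum(t * w for t, w in zip(phase_times, weights))

# One phase, timed at workload sizes 1 and 2 (say, millions of grid cells);
# extrapolate its execution time to size 4, within the modeled range.
a, b = linear_fit([1, 2], [2.0, 4.0])   # a == 2.0, b == 0.0
phase_time = a * 4 + b                  # 8.0 seconds per repetition at size 4
print(predict_total([phase_time], [50]))  # phase repeats 50 times: 400.0
```

A real signature would carry several phases, each with its own fitted time and weight function; the prediction is then the same weighted sum taken over all phases.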
Abstract:
To ensure successful treatment, HIV patients must maintain a high degree of medication adherence over time. Since August 2004, patients who are experiencing (or are at risk of) problems with their HIV antiretroviral therapy (ART) have been referred by their physicians to an interdisciplinary HIV-adherence program. The program consists of a multifactorial intervention along with electronic drug monitoring (MEMS(TM)). The pharmacists organize individualized semi-structured motivational interviews based on cognitive, emotional, behavioral, and social issues. At the end of each session, the patient brings an adherence report to the physician, which enables the physician to use the adherence results in evaluating the treatment plan. The aim of this study was to retrospectively analyze this ongoing interdisciplinary HIV-adherence program. All patients included between August 2004 and the end of April 2008 were analyzed. One hundred and four patients were included (59% women, median age 39 (31.0, 46.0) years, 42% black ethnicity). Eighty (77%) patients were ART-experienced and 59% had a protease-inhibitor-based treatment. The retention rate in the program was high (92%). Inclusion in this HIV-adherence program was determined by patient issues for naive patients and by nonadherence or suboptimal clinical outcomes for ART-experienced patients. The median time spent by a subject at the pharmacy was 35 (25.0, 48.0) minutes, half for medication handling and half for the interview. The adherence results showed a persistence of 87% and an execution of 88%. The proportion of subjects with an undetectable viral load increased during the study. In conclusion, retention and persistence rates were high in this highly selected problematic population.
Antiretroviral adherence program in HIV patients: a feasibility study in the Swiss HIV Cohort Study.
Abstract:
Objective: To evaluate the feasibility of a comprehensive, interdisciplinary adherence program aimed at HIV patients. Setting: Two centers of the Swiss HIV Cohort Study: Lausanne and Basel. Method: 6-month, pilot, quasi-experimental, two-arm design (control and intervention). Patients starting a first or second combined antiretroviral therapy line were invited to participate in the study. Patients entering the intervention arm were offered a multifactorial intervention along with an electronic drug monitor (EDM). It consisted of a maximum of six 30-minute sessions with the interventionist, coinciding with routine HIV check-ups. The sessions relied on individualized semi-structured motivational interviews. Patients in the control arm used a blinded EDM directly and did not participate in motivational interviews. Main outcome measures: Rate of patients' acceptance to take part in the HIV-adherence program and rate of patients' retention in the program, assessed in both the intervention and control groups; persistence, execution, and adherence. Results: The study was feasible in one center but not in the other; hence, the control group previously planned in Basel was recruited in Lausanne. The inclusion rate was 84% (n = 21) in the intervention group versus 52% (n = 11) in the control group (P = 0.027). The retention rate was 91% in the intervention group versus 82% in the control group (P = ns). Regarding adherence, execution was high in both groups (97 vs. 95%). Interestingly, the statistical model showed that adherence decreased more quickly in the control group than in the intervention group (interaction group × time, P < 0.0001). Conclusion: The difficulties encountered lay at the implementation level, i.e., at the program and health-care-system levels, rather than at the patient level. Implementation needs to be evaluated further; to be feasible, a new adherence program needs to fit into the daily routine of the centre and has to be supported by all trained healthcare providers. However, this study shows that patients' adherence behavior evolved differently in the two groups, decreasing more quickly over time in the control group than in the intervention group. RCTs are ultimately needed to assess the clinical impact of such an adherence program and to verify whether skilled pharmacists can ensure continuity of care for HIV outpatients.
Abstract:
The international HyMeX (HYdrological cycle in the Mediterranean EXperiment) program aims to improve our understanding of the water cycle in the Mediterranean, using a multidisciplinary, multiscale approach with an emphasis on extreme events. The program will improve our understanding of hydrometeorological hazards and our ability to predict them, including their evolution over the next century. One of its most important outcomes will be its observational campaigns, which will greatly improve the available data and lead to significant scientific results. The interest of the program for Spanish research groups is described, as is the active participation of some of them in the design and execution of the observational activities. At the same time, owing to its location, Spain is key to the program, serving as a good observation platform. HyMeX will enrich the work of the Spanish research groups, improve the predictive ability of the weather services, help us better understand the impacts of hydrometeorological extremes on our society, and lead to better strategies for adapting to climate change.
Abstract:
G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific workflow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.