837 results for integrated-process model


Relevance:

80.00%

Publisher:

Abstract:

This dissertation focuses on Project HOPE, an American medical aid agency, and its work in Tunisia. More specifically, it is a study of the implementation strategies of those HOPE-sponsored projects and programs designed to solve the problems of high morbidity and infant mortality due to environmentally related diarrheal and enteric diseases. Several environmental health programs and projects developed in cooperation with Tunisian counterparts are described and analyzed: (1) a paramedical manpower training program; (2) a national hospital sanitation and infection control program; (3) a community sewage disposal project; (4) a well reconstruction project; and (5) a solid-waste disposal project for a hospital. After independence, Tunisia, like many developing countries, encountered several difficulties which hindered progress toward solving basic environmental health problems and prompted a request for aid. This study discusses the need for all who work in development programs to recognize and assess the difficulties or constraints that affect the program planning process, including the latent cultural and political constraints that exist not only within the host country but within the aid agency as well. For example, failure to recognize cultural differences may adversely affect the attitudes of the host staff toward their work and toward the aid agency and its task. These factors play a significant role in program development decisions and must be taken into account in order to maximize the probability of successful outcomes. In 1969 Project HOPE was asked by the Tunisian government to assist the Ministry of Health in solving its health manpower problems. HOPE responded with several programs, one of which concerned the training of public health nurses, sanitary technicians, and aides at Tunisia's school of public health in Nabeul.
The outcome of that program, as well as the strategies used in its development, is analyzed, and questions are addressed such as: what should the indicators of success be, and when is the right time to phase out? Another HOPE program analyzed involved hospital sanitation and infection control. Generic aspects of basic hospital sanitation procedures were documented and presented in the form of a process model that was later used as a "microplan" in setting up similar programs in other Tunisian hospitals; the details of the "microplan" are discussed. The development of a nationwide program without any further need of external assistance illustrated the success of HOPE's implementation strategies. Finally, although it is known that the high incidence of enteric disease in developing countries is due to poor environmental sanitation and poor hygiene practices, efforts by aid agencies to correct these conditions have often ended in failure. Project HOPE's strategy was to maximize limited resources by using a systems approach to program development and by becoming actively involved in the design and implementation of environmental health projects utilizing "appropriate" technology. Three innovative projects and their implementation strategies (including technical specifications) are described. It is argued that if aid agencies are to make progress in helping developing countries with basic sanitation problems, they must take an interdisciplinary approach to program development and play an active role in helping counterparts seek and identify appropriate technologies that are socially and economically acceptable.

Relevance:

80.00%

Publisher:

Abstract:

Pneumonia is a well-documented and common respiratory infection in patients with acute traumatic spinal cord injuries, and it may recur during the course of acute care. Using data from the North American Clinical Trials Network (NACTN) for Spinal Cord Injury, the incidence, timing, and recurrence of pneumonia were analyzed. The two main objectives were (1) to investigate the time to, and potential risk factors for, the first occurrence of pneumonia using the Cox proportional hazards model, and (2) to investigate pneumonia recurrence and its risk factors using a counting process model that generalizes the Cox proportional hazards model. The survival analysis suggested that surgery, intubation, American Spinal Injury Association (ASIA) grade, direct admission to a NACTN site, and age (older than 65 or not) were significant risk factors both for the first event of pneumonia and for multiple events. The significance of this research is its potential to identify, at the time of admission, patients at high risk for the incidence and recurrence of pneumonia. Knowledge of the timing of pneumonia occurrence is an important factor for developing prevention strategies and may also provide insight into the selection of emerging therapies that compromise the immune system.
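As a hedged sketch of the machinery behind the first objective (the toy cohort, the single `intubated` covariate, and the grid search below are invented for illustration and are not NACTN data or results), the following evaluates the Breslow form of the Cox partial log-likelihood for one binary covariate and locates its maximizer on a coarse grid:

```python
import math

def cox_partial_loglik(times, events, x, beta):
    """Breslow partial log-likelihood for a single covariate (no tie handling)."""
    ll = 0.0
    for ti, di, xi in zip(times, events, x):
        if not di:
            continue  # censored observations contribute only through risk sets
        # risk set: everyone still event-free just before time ti
        risk = sum(math.exp(beta * xj) for tj, xj in zip(times, x) if tj >= ti)
        ll += beta * xi - math.log(risk)
    return ll

# Invented toy cohort: intubated patients (x = 1) develop pneumonia earlier.
times  = [2, 3, 4, 5, 6, 7, 8, 9]
events = [1, 1, 1, 1, 1, 1, 1, 1]
x      = [1, 1, 1, 1, 0, 0, 0, 0]

# Crude grid search for the maximum partial-likelihood estimate of beta.
best_beta = max((b / 10 for b in range(-30, 31)),
                key=lambda b: cox_partial_loglik(times, events, x, b))
```

A positive estimate corresponds to a hazard ratio above one for the x = 1 group, i.e. earlier pneumonia; the counting-process extension for recurrent events reuses the same partial-likelihood idea over at-risk intervals.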

Relevance:

80.00%

Publisher:

Abstract:

Late Cretaceous (Maastrichtian) to Quaternary summary biostratigraphies are presented for Ocean Drilling Program (ODP) Leg 189 Sites 1168 (West Tasmanian Margin), 1170 and 1171 (South Tasman Rise), and 1172 (East Tasman Plateau). The age models are calibrated to magnetostratigraphy and integrate both calcareous (planktonic foraminifers and nannofossils) and siliceous (diatoms and radiolarians) microfossil groups with organic-walled microfossils (organic-walled dinoflagellate cysts, or dinocysts). We also incorporate benthic oxygen isotope stratigraphies into the upper Quaternary parts of the age models for further control. The purpose of this paper is to provide a summary age-depth model for all deep-penetrating sites of Leg 189, incorporating updated shipboard biostratigraphic data with new information obtained during the three years since the cruise. In this respect we provide a report of work to November 2003, not a final synthesis of the biomagnetostratigraphy of Leg 189, yet we present the most complete integrated age model for these sites at this time. Detailed information on the stratigraphy of the individual fossil groups, the paleomagnetism, and the isotope data is presented elsewhere. Ongoing efforts aim toward further integration of age information for Leg 189 sites and will include an attempt to correlate zonation schemes for all the major microfossil groups and a detailed correlation between all sites.
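At its core, a summary age-depth model of this kind reduces to interpolating ages between dated control points. The sketch below is a minimal illustration; the tie points are hypothetical and are not the Leg 189 calibration:

```python
def age_at_depth(depth_mbsf, tiepoints):
    """Linearly interpolate an age (Ma) at a given depth (mbsf) between
    dated (depth, age) tie points, as in a simple age-depth model."""
    pts = sorted(tiepoints)
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= depth_mbsf <= d1:
            return a0 + (a1 - a0) * (depth_mbsf - d0) / (d1 - d0)
    raise ValueError("depth outside the calibrated interval")

# Hypothetical magnetostratigraphic tie points, NOT the Leg 189 values.
tiepoints = [(0.0, 0.0), (50.0, 2.6), (120.0, 10.0)]
```

Between tie points the model assumes a constant sedimentation rate; integrating independent microfossil and isotope datums mainly serves to add and cross-check such control points.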

Relevance:

80.00%

Publisher:

Abstract:

Knowledge about the processes of perception and understanding is of paramount importance when designing means of communication such as maps and charts. This is especially true if the map user is to be kept in view and map design is to be oriented to users' needs and preferences in order to improve the cartographic product's usability. A scientific approach to visualization can help achieve usable results: the insights gained can lead to modes of visualization that, in utility and efficiency, are superior to those which have seemingly proved their value in practice (so-called "best practices"). This thesis demonstrates this using the example of visualizing the limits of bodies of water in the Southern Ocean. After some introductory remarks in Chapter 1 on the chosen mode of problem solving, which also illustrate the workflow followed, Chapter 2 outlines the relevant information concerning the drawing of limits in the Southern Ocean. Chapter 3 builds the theoretical framework, a multidisciplinary approach to representation based on "How Maps Work" by the American cartographer MacEachren (1995/2004). His "scientific approach to visualization" is amended and adjusted where necessary with recent findings from the social sciences, so that the approach suggested in this thesis represents a synergy of psychology, sociology, semiotics, linguistics, communication theory, and cartography, in the tradition of interdisciplinary research that crosses the boundaries of any single discipline. The resulting holistic approach can help improve the usability of cartographic products. On the one hand, it illustrates the processes that take place while perceiving and recognizing cartographic information (so-called bottom-up processes).
On the other hand, it illuminates the processes involved in understanding this information (so-called top-down processes). Bottom-up and top-down processes are interdependent and inseparably interrelated, and neither can be understood without the other. Regarding usability, the approach suggested in this thesis focuses strongly on the map user. For this reason the phenomenon of communication carries more weight than in MacEachren's map-centered approach, and Chapter 4 therefore develops a holistic approach to communication. This approach makes clear that only the map user can evaluate the usability of a cartographic product: only if users can extract the information relevant to them is the product really usable, and the concept of communication is well suited to capture this. For the visualization of the limits of bodies of water in the Southern Ocean, a case not complex enough to exercise all the theoretical results, it is suggested that the limits be drawn as red lines. This suggestion deviates from the commonly used mode of visualization, showing how theory can improve practice. Chapter 5 returns to the task of fixing the limits of the bodies of water in the area of concern. A convention of the International Hydrographic Organization (IHO) states that these limits should be drawn using meridians, parallels, rhumb lines, and bathymetric data. Based on the available bathymetric data, both a representation model and a process model are calculated to support the drawing of the limits. The quality of both models, which depends on the quality of the bathymetric data at hand, leads to the decision that the representation model is better suited to support the drawing of the limits.

Relevance:

80.00%

Publisher:

Abstract:

This thesis analyzes the integration of the telecommunications sector with the IT and media sectors that make up the current hyper-ICT sector, in order to address a value proposition posed at two levels. On one level it presents the WIMS 2.0 initiative, which addresses the technological and strategic aspects of telco-Internet convergence; on the other it defines a new business model which, adapted to the new integrated sector and following novel paradigms such as those posed by open innovation, generates new revenue streams in areas not typical for telecommunications operators. Chapter 2 contextualizes the broadband communications environment from three angles, technological, economic, and the current market, at national, European, and world scale. It thereby establishes the basis for the following chapters by demonstrating how broadband penetration has driven the development of a new value system in the integrated ICT sector, around which original business-model proposals arise; these are categorized in the author's own taxonomy. Chapter 3 details the value proposition of the WIMS 2.0 initiative, founded and led by the author of this thesis. WIMS 2.0, as an open initiative, presents to the community a proposal for a new ecosystem and an integrated reference model on which to deploy converged services. In addition to the theoretical approach, it provides the practical perspective of deploying the reference model within the architecture of an operator such as Telefónica. Chapter 4 presents the Innovation 2.0 business model, based on open innovation, with the goal of capturing new revenue streams by enlarging the portfolio of innovative services with fresh and brilliant ideas from start-ups. Far from remaining a mere theoretical proposal, Innovation 2.0 has shown its benefits through practical success in the market, which has validated the hypotheses put forward. The last chapter sets out future lines of research for both the WIMS 2.0 initiative and the Innovation 2.0 model, some of which are already being addressed.

Relevance:

80.00%

Publisher:

Abstract:

The aim of this paper is to propose an integrated planning model that adjusts the offered capacity and system frequencies to meet increased passenger demand and traffic congestion around urban and suburban areas. Railway capacity is studied in line planning; however, the planned frequencies are obtained without accounting for rolling stock flows through the rapid transit network. In order to give the problem more freedom to decide rolling stock flows, and therefore to better adjust these flows to passenger demand, a new integrated model is proposed in which frequencies are readjusted. The railway timetable and rolling stock assignment are then calculated, with shunting operations taken into account. These operations may sometimes malfunction, causing localized incidents that can propagate throughout the entire network due to cascading effects; such operations are therefore penalized with the goal of selectively avoiding them and mitigating their high malfunction probabilities. Swapping operations are also supported by using homogeneous rolling stock material and by ensuring parking at strategic stations. We illustrate the model with computational experiments drawn from RENFE (the main Spanish operator of suburban passenger trains) in Madrid, Spain. The results show that this integrated approach achieves a greater degree of robustness.
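A toy sketch of the capacity-adjustment idea follows. The line names, demand figures, and the one-train-per-hourly-departure fleet proxy are invented; the paper's actual model is an integrated optimization over frequencies, timetables, and rolling stock, not this greedy rule:

```python
import math

def adjust_frequencies(demand_per_hour, seats_per_train, fleet_limit):
    """Smallest per-line hourly frequencies covering demand, sanity-checked
    against a crude rolling stock budget (one train per hourly departure)."""
    freqs = {line: math.ceil(d / seats_per_train)
             for line, d in demand_per_hour.items()}
    trains_needed = sum(freqs.values())  # proxy, ignores cycling of trains
    if trains_needed > fleet_limit:
        raise ValueError("demand cannot be covered with available rolling stock")
    return freqs
```

For example, with 2400 and 900 passengers/hour on two hypothetical lines and 600-seat trains, the rule yields frequencies of 4 and 2 trains/hour; the integrated model additionally decides which physical trains run those departures and where they are parked.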

Relevance:

80.00%

Publisher:

Abstract:

Advances in hardware make it possible to collect huge volumes of data, and applications are emerging that must deliver information in near-real time, e.g., patient monitoring or health monitoring of water pipes. The needs of these applications give rise to the data streaming model as opposed to the traditional store-then-process model. Whereas in the store-then-process model data are stored to be queried later, in streaming systems data are processed on arrival, producing continuous responses without ever being stored in full. This view imposes challenges for processing data on the fly: (1) responses must be produced continuously whenever new data arrive in the system; (2) data are accessed only once and are generally not retained in their entirety; and (3) the per-item processing time needed to produce a response must be low. Two models exist for computing continuous responses, the evolving model and the sliding-window model; the latter fits certain applications better because it considers only the most recently received data rather than the entire history. In recent years, data stream mining has focused mainly on the evolving model. Work on the sliding-window model is scarcer, since such algorithms must not only be incremental but must also delete the information that expires as the window slides, while still meeting the three challenges above. One of the fundamental tasks in data mining is clustering: given a data set, the goal is to find representative groups that provide a concise description of the set. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Because of the massive amounts of data that must be processed in such applications (up to millions of events per second), centralized solutions may be unable to cope with processing-time restrictions and resort to discarding data during load peaks. To avoid this loss of data, stream processing must be distributed; in particular, clustering algorithms must be adapted to environments in which the data are distributed. In streaming, research focuses not only on designs for general tasks such as clustering, but also on finding new approaches that fit particular scenarios better. As an example, an ad hoc grouping mechanism turns out to be more adequate for defense against Distributed Denial of Service (DDoS) attacks than the traditional k-means problem. This thesis contributes to the streaming clustering problem in both centralized and distributed environments. We designed a centralized clustering algorithm and showed, in an extensive evaluation, its ability to discover high-quality clusters in low time compared with state-of-the-art solutions. We also developed a data structure that significantly reduces the required memory space while keeping the error of the computations under control at all times. Our work further provides two protocols for distributing the clustering computation, analyzing two key features: the impact on clustering quality when the computation is distributed, and the conditions required to reduce processing time with respect to the centralized solution. Finally, we developed a clustering-based framework for DDoS attack detection, characterizing the types of attacks detected and evaluating the efficiency and effectiveness of mitigating the attack's impact.
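As a minimal illustration of the sliding-window model discussed above (the window size and the 1-D chunking "clustering" are invented simplifications, not the thesis's algorithm), the sketch below keeps only the most recent points, so that expired data drop out of the centroid computation automatically:

```python
from collections import deque

class SlidingWindowClusterer:
    """Toy sliding-window clustering: retains the last `window` 1-D points
    and recomputes k centroids over that window only."""

    def __init__(self, window, k):
        self.window = deque(maxlen=window)  # old points expire automatically
        self.k = k

    def add(self, x):
        self.window.append(x)  # single pass: each point is seen once

    def centroids(self):
        pts = sorted(self.window)
        # split sorted points into k contiguous chunks (crude 1-D clustering)
        size = max(1, len(pts) // self.k)
        chunks = [pts[i:i + size] for i in range(0, len(pts), size)][:self.k]
        return [sum(c) / len(c) for c in chunks]
```

Feeding points 0, 1, 10, 11 into a window of 4 with k = 2 yields centroids 0.5 and 10.5; after two more points arrive, the oldest two expire and the centroids shift accordingly, which is exactly the behavior evolving-model algorithms do not provide.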

Relevance:

80.00%

Publisher:

Abstract:

This paper studies the disruption management problem of rapid transit rail networks. Besides optimizing the timetable and the rolling stock schedules, we explicitly deal with the effects of the disruption on passenger demand. We propose a two-step approach that combines an integrated optimization model (for the timetable and rolling stock) with a model for the passengers' behavior. We report computational tests on realistic problem instances of the Spanish rail operator RENFE, covering several lines of the Madrid commuter network, with satisfactory results. The proposed approach is able to find solutions with a very good balance between various managerial goals within a few minutes.

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes an extension of methods used to predict the propagation of landslides having a long runout to smaller landslides with much shorter propagation distances. The method is based on: (1) a depth-integrated mathematical model including the coupling between the soil skeleton and the pore fluids, (2) suitable rheological models describing the relation between the stress and the rate of deformation tensors for fluidised soils and (3) a meshless numerical method, Smooth Particle Hydrodynamics, which separates the computational mesh (or set of computational nodes) from the mesh describing the terrain topography, which is of structured type – thus accelerating search operations. The proposed model is validated using two examples for which there are analytical solutions, and then it is applied to two short runout landslides which happened in Hong Kong in 1995, for which there is available information.
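The meshless discretization mentioned in (3) rests on a smoothing kernel. Below is the standard cubic spline kernel in its 1-D normalization, a common choice in SPH generally, though not necessarily the exact kernel used by the authors:

```python
def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline SPH kernel W(r, h), support radius 2h,
    normalization sigma = 2 / (3h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # compact support: particles beyond 2h do not interact
```

The compact support (W = 0 beyond 2h) is what makes the neighbor search the dominant cost, and hence why the paper's structured topography mesh accelerates those search operations.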

Relevance:

80.00%

Publisher:

Abstract:

There is currently great expectation around the introduction of new tools and methods for software product development, which will in the near future enable an engineering approach to the software production process. The methodologies now emerging take an integral approach to the problem, covering every phase of the production scheme. However, the degree of automation achieved in the systems construction process is very low and is concentrated in the last phases of the software life cycle, yielding an insignificant cost reduction and, more importantly, no guarantee of the quality of the resulting software products. This thesis defines a structured software development methodology that can be automated, that is, a CASE methodology. The methodology follows the CASE development cycle model, consisting of analysis, design, and testing phases, and its field of application is information systems. First, the basic principles on which the CASE methodology rests are established. Then, since the methodology begins by fixing the objectives of the company requesting an information system, techniques are employed for gathering and validating information that also provide an easy communication language between end users and developers. These same techniques specify all system requirements completely, consistently, and unambiguously. A set of techniques and algorithms is then presented to automate, starting from the system requirements specification, the logical design of both the Process Model and the Data Model, each validated against the prior requirements specification. Finally, formal procedures are defined that indicate the set of activities to be carried out in the construction process, and how to carry them out, thereby achieving integrity across the stages of the development process.

Relevance:

80.00%

Publisher:

Abstract:

New technologies, such as Information and Communication Technology (ICT), break new paths and redefine the way we understand business; Cloud Computing is one of them. On-demand resource gathering and pay-per-use schemes are now commonplace and allow companies to save on their ICT investments. Despite the importance of this issue, we still lack methodologies that help companies develop applications oriented toward exploitation in the Cloud. In this study we aim to fill this gap and propose a methodology for the development of ICT applications that are directed toward a business model and subsequently outsourced to the Cloud. In the former part, the development of SOA applications, we take as a baseline scenario a business model from which to obtain a business process model, using software engineering tools. In the latter part, outsourcing, we propose a guide to facilitate uploading business models into the Cloud; to this end we describe a SOA governance model that controls the SOA. Additionally we propose a Cloud governance model that integrates Service Level Agreements (SLAs), SOA governance, and Cloud architecture. Finally we apply our methodology to an example illustrating our proposal. We believe that our proposal can be used as a guide/pattern for the development of business applications.

Relevance:

80.00%

Publisher:

Abstract:

Reference of the commented article: Heise, Lori L. «Violence against women: An integrated, ecological framework». Violence Against Women 1998; 4: 262-290.

Relevance:

80.00%

Publisher:

Abstract:

Many models proposed in the literature over recent decades aim to assess the reliability, availability, and maintainability (RAM) of safety equipment, many of them focused on assessing the risk level of a technological system or on searching for appropriate design and/or surveillance and maintenance policies that keep an optimum level of RAM of safety systems throughout the plant's operational life. This paper proposes a new approach to RAM modelling that accounts, in an integrated manner, for equipment ageing and for the effectiveness of maintenance and testing of equipment consisting of multiple items. The model is then used to perform the simultaneous optimization of testing and maintenance for ageing equipment consisting of multiple items. An example of application is provided, which considers a simplified High Pressure Injection System (HPIS) of a typical Pressurized Water Reactor (PWR). The system consists of motor-driven pumps (MDP) and motor-operated valves (MOV), each component type comprising two items. These components present different failure modes, causes, and behaviours, and they undergo complex test and maintenance activities depending on the item involved. The results of the example demonstrate that the optimization algorithm provides the best solutions when the problem is formulated and solved with full flexibility in the implementation of the testing and maintenance activities that form part of such an integrated RAM model.
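A minimal sketch of the testing trade-off such an optimization exploits, using the textbook average-unavailability approximation for a periodically tested standby component (constant failure rate, no ageing, and a single item, so far simpler than the paper's integrated model; all numbers below are illustrative):

```python
def mean_unavailability(lmbda, T, test_duration):
    """Textbook average unavailability of a periodically tested standby
    component: lambda*T/2 from undetected failures between tests, plus
    the downtime fraction spent performing the test itself."""
    return lmbda * T / 2.0 + test_duration / T

def best_test_interval(lmbda, test_duration, candidates):
    """Pick the candidate test interval with the lowest mean unavailability."""
    return min(candidates, key=lambda T: mean_unavailability(lmbda, T, test_duration))
```

Testing too often wastes availability on test downtime; testing too rarely leaves failures undetected. For an illustrative failure rate of 1e-4 per hour and a 1-hour test, the optimum among monthly-grid candidates lands near sqrt(2 * test_duration / lambda), about 141 hours.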

Relevance:

80.00%

Publisher:

Abstract:

In 1991, Bryant and Eckard estimated the annual probability that a cartel would be detected by the US federal authorities, conditional on being detected, to be at most between 13% and 17%. Fifteen years later, we estimated the same probability over a European sample and found an annual probability between 12.9% and 13.3%. We also develop a detection model to clarify this probability. Our estimate is based on detection durations, calculated from data reported for all the cartels convicted by the European Commission from 1969 to date, and on a statistical birth-and-death process model describing the onset and detection of cartels.
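A hedged sketch of the simplest duration-based estimate (a constant-hazard assumption; the paper's birth-and-death model is richer): if times to detection were exponentially distributed, the annual detection probability would follow directly from the mean observed duration. The durations below are invented, not the Commission data:

```python
import math

def annual_detection_probability(durations_years):
    """Constant-hazard sketch: with exponential time-to-detection of rate
    1/mean, the probability of detection within any given year is 1 - e^{-rate}."""
    rate = 1.0 / (sum(durations_years) / len(durations_years))
    return 1.0 - math.exp(-rate)

# Invented detection durations (years from cartel onset to detection).
p = annual_detection_probability([5.0, 7.0, 9.0])
```

With a mean duration of 7 years this gives an annual probability of about 13%, the order of magnitude the abstract reports, though the paper's estimate accounts for censoring of cartels never detected.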

Relevance:

80.00%

Publisher:

Abstract:

Higher education institutions, including the Escola Superior de Desporto de Rio Maior of the Instituto Politécnico de Santarém, currently face several questions and challenges related to their accreditation and that of their study cycles, and consequently to improving the quality of their performance and their access to funding. This reality demands new approaches and a higher level of rigor from everyone who contributes to the quality of the service provided. In response to these challenges, the Evaluation and Quality Office has developed initiatives and approaches of which the present work is an example. Starting from a Business Process Management approach, this work aims to demonstrate the viability and operability of using a Business Process Management System tool in this context. To that end, the evaluation and accreditation process conducted by the Agência de Avaliação e Acreditação do Ensino Superior was modelled using Business Process Model and Notation. This proposal made it possible to model the institution's processes, demonstrating the use of a Business Process Management approach in an organization of this nature, with the aim of promoting its improvement.