998 results for "software failure"
Abstract:
The availability of a network strongly depends on the frequency of service outages and the recovery time for each outage. Loss of network resources includes complete or partial failure of hardware and software components, power outages, scheduled maintenance of software and hardware, operational errors such as configuration mistakes, and acts of nature such as floods, tornadoes and earthquakes. This paper proposes a practical approach to enhancing QoS routing by providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path, such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and an LSP request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examines several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, to improve Quality of Service (QoS) and throughput and to reduce protection-path placement cost, delay, congestion and collisions.
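As an illustrative sketch only, and not the scheme proposed in the paper, the following Python fragment finds a repair path that avoids the links of a working path by banning those links during a breadth-first search; the topology, node names and paths are hypothetical.

    # Illustrative sketch: find a repair path that avoids the working path's links.
    # The topology, node names and paths are hypothetical examples.
    from collections import deque

    def shortest_path(adj, src, dst, banned_links=frozenset()):
        """Breadth-first search that ignores any link listed in banned_links."""
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for neigh in adj[node]:
                link = frozenset((node, neigh))
                if neigh not in prev and link not in banned_links:
                    prev[neigh] = node
                    queue.append(neigh)
        return None  # no repair path exists

    adj = {"ingress": ["A", "B"], "A": ["ingress", "egress", "B"],
           "B": ["ingress", "A", "egress"], "egress": ["A", "B"]}
    working = ["ingress", "A", "egress"]
    working_links = {frozenset(pair) for pair in zip(working, working[1:])}
    print(shortest_path(adj, "ingress", "egress", banned_links=working_links))
    # -> ['ingress', 'B', 'egress']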
Abstract:
This work analyzes the Brazilian software industry, in particular software for export and software factories, focusing on the strategies and challenges of the national Information and Communication Technology entrepreneurs operating in this segment. Based on considerations about Taylorism, Fordism, labor flexibility, current labor legislation, the competitiveness of the software market, maturity in management processes and corporate social responsibility, the work puts into perspective the main factors that can influence the success or failure of Brazilian Information and Communication Technology companies investing in the software factory segment.
Abstract:
In businesses such as the software industry, which uses knowledge as a resource, activities are knowledge intensive, requiring constant adoption of new technologies and practices. Another feature of this environment is that the industry is particularly susceptible to failure; with this in mind, the objective of this research is to analyze the integration of Knowledge Management techniques into risk management as applied to software development projects of micro and small Brazilian incubated technology-based firms. The research method chosen was the multiple case study. The main risk factor for managers and developers is that scope or goals are often unclear or misinterpreted. For risk management, firms found that Knowledge Management techniques based on the combination mode of knowledge conversion would be the most applicable; however, those most commonly used correspond to the internalization mode. © 2013 Elsevier Ltd, APM and IPMA.
Abstract:
When a bolted joint is loaded dynamically in tension, part of the load is absorbed by the bolt and the rest is absorbed by the joint material. The portion absorbed by the bolt is determined by the joint stiffness factor. This factor influences the tension corresponding to the pre-load and the safety factor against fatigue failure, and is therefore an important parameter in the design of bolted joints. In this work, three methods of calculating the stiffness factor are compared using an Excel spreadsheet. A graph of the initial pre-load ratio and the fatigue safety factor as functions of the stiffness factor is generated. The calculations for each method show only small differences in their results. It is therefore recommended that each design case be analyzed and that, depending on its conditions and the range of stiffness values, the method that is more or less conservative with respect to the fatigue safety factor be chosen. In general, the approximation method provides consistent results and is easy to calculate.
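For orientation, a minimal sketch of the standard textbook load-sharing relations behind the joint stiffness factor (not the three specific estimation methods compared in the paper); all numeric values are hypothetical.

    # Illustrative sketch of the classic joint stiffness (load-sharing) factor.
    # Values are hypothetical; the three estimation methods compared in the
    # paper differ mainly in how the member stiffness k_m is obtained.
    k_b = 2.0e8   # bolt stiffness, N/m
    k_m = 6.0e8   # clamped-members stiffness, N/m
    P   = 10.0e3  # external tensile load on the joint, N
    F_i = 25.0e3  # initial pre-load, N

    C = k_b / (k_b + k_m)        # joint stiffness factor
    P_bolt    = C * P            # share of the external load carried by the bolt
    P_members = (1.0 - C) * P    # share carried by the clamped members
    F_bolt    = F_i + P_bolt     # resulting total bolt tension

    print(f"C = {C:.2f}, bolt load = {P_bolt:.0f} N, total bolt force = {F_bolt:.0f} N")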
Abstract:
The objective of this study was to develop a model that allows testing in the wind tunnel at high angles of attack and to validate its most critical components by analyzing the results of simulations in finite element software. During the project, the major loads this structure experiences under flight conditions were identified and, from these, the stresses were calculated in the critical regions, defined as the parts of the model with the highest failure probabilities. All aspects associated with load application methods, mesh refinement and stress analysis were taken into account in this approach. The selection of the analysis software was based on project needs, seeking greater ease of modeling and simulation. We opted for ANSYS® since the entire project is being developed on CAD platforms, enabling seamless integration between the modeling and analysis software.
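As a simplified illustration, not the actual ANSYS® workflow used in the study, the following sketch shows the kind of check typically made on a critical region: computing the von Mises equivalent stress from principal stresses and comparing it with the material's yield strength. All values are hypothetical.

    # Illustrative sketch: von Mises equivalent stress and static safety factor
    # for a critical region. Principal stresses and yield strength are hypothetical.
    from math import sqrt

    def von_mises(s1, s2, s3):
        """Equivalent (von Mises) stress from the three principal stresses, in MPa."""
        return sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

    sigma_yield = 250.0                   # MPa, hypothetical yield strength
    s_eq = von_mises(120.0, 40.0, -15.0)  # MPa, hypothetical principal stresses
    print(f"von Mises stress = {s_eq:.1f} MPa, safety factor = {sigma_yield / s_eq:.2f}")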
Abstract:
The objective of this work is to upgrade an existing database management environment to version 11.2 of the Oracle database software and to a state-of-the-art hardware platform. Several databases scattered across different servers are migrated, with zero downtime, to a consolidated environment of two nodes arranged in "active-active" high availability using Oracle RAC, backed by a fully independent contingency environment synchronized in real time using Oracle GoldenGate. A study of the current environment is carried out and, based on a growth estimate, a minimum hardware and software configuration is proposed to implement the requirements of the database management environment in the short and medium term with guarantees of success. Once the hardware has been acquired, the operating system is installed, updated and configured, and redundant access from the servers to the storage array is set up. The Oracle cluster software and the database software are then installed, and an instance is created to host the required schemas of the databases to be consolidated. The schemas are subsequently migrated to the consolidated environment and their real-time replication to the contingency machine is established, using Oracle GoldenGate in both cases. Finally, a backup scheme is created and tested that includes logical and physical copies of the database itself and of the cluster configuration files, from which it will be possible to restore the environment completely.
Abstract:
Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success of offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently of the threat of opportunistic behavior, challenging the predominant TCE logic of market failure. Rather, the client extra costs were particularly high in client-specific projects because the effort for managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experience of the vendor with related client projects was found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor, as well as personnel turnover, were found to increase client extra costs. Slight evidence was found, however, that the cost-increasing impact of these factors was also amplified in projects with a high level of required client-specific knowledge (moderator effect).
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, focusing on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from the start of ART until death. We modeled 13 strategies (no second-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs), and calculated incremental cost-effectiveness ratios (ICERs). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing second-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-5488/DALY averted and that of VL monitoring US$951-5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except at the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between first- and second-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring depends essentially on the cost of second-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex and age distributions and unit costs.
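As a minimal worked illustration of the ICER concept used above (with hypothetical figures, not the study's model outputs):

    # Illustrative sketch of an incremental cost-effectiveness ratio (ICER)
    # between two monitoring strategies; costs and DALYs averted are hypothetical.
    def icer(cost_new, daly_averted_new, cost_ref, daly_averted_ref):
        """Extra cost per additional DALY averted, comparing a new strategy to a reference."""
        return (cost_new - cost_ref) / (daly_averted_new - daly_averted_ref)

    # Hypothetical lifetime cost (US$) and DALYs averted per patient.
    clinical = (2000.0, 1.0)
    poc_vl   = (3500.0, 2.2)
    print(f"ICER of POC-VL vs clinical monitoring: "
          f"US${icer(poc_vl[0], poc_vl[1], clinical[0], clinical[1]):.0f}/DALY averted")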
Abstract:
Objective. To determine whether transforming growth factor beta (TGF-β) receptor blockade using an oral antagonist has an effect on cardiac myocyte size in the hearts of transgenic mice with a heart failure phenotype. Methods. In this pilot experimental study, cardiac tissue sections from the hearts of transgenic mice overexpressing tumor necrosis factor (MHCsTNF mice), which have a heart failure phenotype, and of wild-type mice, treated with an orally available TGF-β receptor antagonist, were stained with wheat germ agglutinin to delineate the myocyte cell membrane and imaged using fluorescence microscopy. Using MetaVue software, the cardiac myocyte circumference was traced and the cross-sectional area (CSA) of individual myocytes was measured. Measurements were repeated at the epicardial, mid-myocardial and endocardial levels to ensure adequate sampling and to minimize the effect of regional variations in myocyte size. ANOVA testing with post-hoc pairwise comparisons was performed to assess any difference between the drug-treated and diluent-treated groups. Results. There were no statistically significant differences in the average myocyte CSA measured at the epicardial, mid-myocardial or endocardial levels between diluent-treated littermate control mice, drug-treated normal mice, diluent-treated transgenic mice and drug-treated transgenic mice. There was no difference in the average pan-myocardial cross-sectional area between any of the four groups mentioned above. Conclusions. TGF-β receptor blockade using an oral TGF-β receptor antagonist does not alter myocyte size in MHCsTNF mice that have a heart failure phenotype.
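A minimal sketch of the ANOVA comparison described above, using SciPy's one-way ANOVA across the four treatment groups; the cross-sectional area values are hypothetical, not the study's MetaVue measurements.

    # Illustrative sketch: one-way ANOVA across four groups of myocyte
    # cross-sectional areas (values in um^2 are hypothetical).
    from scipy.stats import f_oneway

    diluent_littermate = [210, 225, 198, 240, 215]
    drug_wildtype      = [205, 230, 220, 212, 226]
    diluent_transgenic = [248, 260, 255, 242, 251]
    drug_transgenic    = [246, 258, 249, 252, 244]

    f_stat, p_value = f_oneway(diluent_littermate, drug_wildtype,
                               diluent_transgenic, drug_transgenic)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    # Post-hoc pairwise comparisons (e.g. Bonferroni-corrected t-tests) would
    # follow only if the overall ANOVA were significant.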
Abstract:
In the last decades, software systems have become an intrinsic element of our daily lives. Software exists in our computers, in our cars, and even in our refrigerators. Today's world has become heavily dependent on software and yet we still struggle to deliver quality software products, on time and within budget. When searching for the causes of such an alarming scenario, we find concurrent voices pointing to the role of the project manager. But what is project management, and what makes it so challenging? Part of the answer to this question requires a deeper analysis of why software project managers have been largely ineffective. Answering this question might assist current and future software project managers in avoiding, or at least effectively mitigating, problematic scenarios that, if unresolved, will eventually lead to additional failures. This is where anti-patterns come into play and where they can be a useful tool in identifying and addressing software project management failure. Unfortunately, anti-patterns are still a fairly recent concept, and thus available information is still scarce and loosely organized. This thesis attempts to help remedy this scenario. The objective of this work is to help organize existing, documented software project management anti-patterns by answering our two research questions: · What are the different anti-patterns in software project management? · How can these anti-patterns be categorized?
Abstract:
In the last two decades, the relevance of knowledge acquisition and dissemination processes within companies has been highlighted, and consequently the study of these processes and the implementation of technologies that facilitate them has generated growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved towards a flatter configuration, with more agile network structures, decreasing dependence on a centralized authority and building team-work-oriented organizations. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed rapidly. These collaboration tools are characterized by a strong social component and can reach their full potential when they are deployed in flat organizational structures. Web 2.0, based on the participation of the users themselves, arose as a concept opposed to the website-based technologies that existed at the end of the 1990s. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted these tools immediately, even though there was no unanimity about their real usefulness or how to measure it. This is partly because the factors that lead employees to adopt them are not properly understood, which has led to implementation failures due to the existence of certain barriers. Given this situation, and in view of the theoretical advantages that these Web 2.0 collaboration tools have for companies, managers and the scientific community show a growing interest in answering the question: which factors contribute to company employees adopting these Web 2.0 tools for collaboration? The answer is complex, since these are relatively new tools in the business context through which knowledge management, rather than information management, can be carried out. The approach taken in this work to answer the question is the application of technology adoption models, which are based on individuals' perceptions of different aspects related to the use of the technology. Under this approach, the main objective of this work is to study the factors that influence the adoption of blogs and wikis in companies, by means of a unified, theoretical, predictive model of technology adoption, with a holistic approach built from the literature on technology adoption models and from the particularities of the tools under study and of the specific context. This theoretical model will make it possible to determine the factors that predict the intention to use these tools and their actual use. The research work is structured in five parts: introduction to the research topic, development of the theoretical framework, design of the research work, empirical analysis, and elaboration of conclusions. In terms of the structure of the thesis, the five parts are developed sequentially over seven chapters, the first part corresponding to chapter 1, the second to chapters 2 and 3, the third to chapters 4 and 5, the fourth to chapter 6, and the fifth and last part to chapter 7.
Chapter 1 focuses on the statement of the research problem and on the main and secondary objectives to be met throughout the work. It also presents the concept of collaboration and its fit with the Web 2.0 collaborative tools considered in the research, together with an introduction to technology adoption models, the justification of the research, its objectives and the work plan. After introducing the research topic, chapter 2 reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), describing their foundations and the factors they employ. Building on the models presented in chapter 2, chapter 3 studies those factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis. To make the final model easier to understand, the factors are grouped into four types: technological factors, control factors, socio-normative factors and other factors specific to the collaborative tools. Chapter 4 selects the factors that are most appropriate for studying the adoption of the collaborative tools and defines a model that specifies the relationships between them; these relationships become the working hypotheses to be tested in the empirical study. Chapter 5 specifies the characteristics of the empirical work carried out to test the hypotheses stated in chapter 4. The research is social in nature and exploratory, and is based on a quantitative empirical study whose analysis is carried out using multivariate analysis techniques. This chapter describes the construction of the scales of the measurement instrument and the data collection methodology, and then presents a detailed analysis of the sample as well as a check for the presence of bias attributable to the measurement method (common method bias). Chapter 6 presents the analysis of results, preceded by a description of the statistical technique employed, PLS-SEM, as a multivariate analysis tool with predictive capability, the methodology used to validate the measurement model and the structural model, the requirements the sample must meet, and the thresholds of the parameters considered. The second part of chapter 6 carries out the empirical analysis of the data for the two samples, one for blogs and one for wikis, in order to validate the research hypotheses put forward in chapter 4. Finally, chapter 7 reviews the degree of fulfilment of the objectives set out in chapter 1 and presents the theoretical, methodological and practical contributions derived from the work, followed by the general conclusions, detailed conclusions for each group of factors, and the practical recommendations that can be drawn to guide the implementation of these tools in real situations.
The final part of the chapter includes the limitations of the study and suggests a number of possible lines of future work of interest, along with the partial research results obtained during the course of the research.
Abstract:
In the last decades, accumulated clinical evidence has proven that intra-operative radiation therapy (IORT) is a very valuable technique. In spite of that, planning technology has not evolved since its conception, being outdated in comparison with the current state of the art in other radiotherapy techniques and therefore slowing down the adoption of IORT. RADIANCE is an IORT planning system, CE and FDA certified, developed by a consortium of companies, hospitals and universities to overcome such technological backwardness. RADIANCE provides all basic radiotherapy planning tools, specifically adapted to IORT. These include, but are not limited to, image visualization, contouring, dose calculation algorithms, namely Pencil Beam (PB) and Monte Carlo (MC), DVH calculation and reporting. Other new tools, such as surgical simulation tools, have been developed to deal with specific conditions of the technique. Planning with preoperative images (preplanning) has been evaluated, and the validity of the system has been proven in terms of documentation, treatment preparation and learning, as well as improvement of the communication process between surgeons and radiation oncologists (ROs). Preliminary studies on navigation systems envisage benefits in how the specialist can accurately and safely apply the pre-plan to the treatment, updating the plan as needed. Improvements in the usability and workflow of this kind of system are needed to make it more practical. Preliminary studies on intraoperative imaging indicate that it could provide an improved anatomy for the dose computation and its comparison with the previous pre-plan, although not all devices on the market offer suitable characteristics to do so. The DICOM.RT standard for radiotherapy information exchange has been updated to cover IORT particularities, enabling dose summation with external radiotherapy. The effect of this planning technology on the global risk of the IORT technique has been assessed and documented as part of a failure mode and effects analysis (FMEA). Given these technological innovations and their clinical evaluation (including risk analysis), we consider RADIANCE a very valuable tool for the specialist, covering the demands of professional societies (AAPM, ICRU, EURATOM) for current radiotherapy procedures.
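As a toy illustration only, not RADIANCE's implementation, a cumulative dose-volume histogram (DVH) can be computed from a voxel dose grid as the fraction of a structure's voxels receiving at least each dose level; the dose values below are hypothetical.

    # Illustrative sketch: cumulative dose-volume histogram (DVH) for one structure.
    # Voxel doses (Gy) are hypothetical and would normally come from a PB or MC dose grid.
    import numpy as np

    voxel_doses = np.array([18.0, 19.5, 20.2, 21.0, 17.3, 20.8, 19.9, 16.5])  # Gy
    dose_levels = np.arange(0.0, voxel_doses.max() + 0.5, 0.5)                # Gy bins

    # Fraction of the structure volume receiving at least each dose level.
    volume_fraction = [(voxel_doses >= d).mean() for d in dose_levels]

    for d, v in zip(dose_levels, volume_fraction):
        print(f"D >= {d:4.1f} Gy : {100 * v:5.1f} % of volume")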
Abstract:
Computer-based, socio-technical systems projects are frequently failures. In particular, computer-based information systems often fail to live up to their promise. Part of the problem lies in the uncertainty of the effect of combining the subsystems that comprise the complete system; i.e. the system's emergent behaviour cannot be predicted from a knowledge of the subsystems. This paper suggests that uncertainty management is a fundamental unifying concept in the analysis and design of complex systems, and goes on to indicate that this is due to the co-evolutionary nature of the requirements and implementation of socio-technical systems. The paper presents a model of the propagation of a system change which indicates that the introduction of two or more changes over time can cause chaotic emergent behaviour.
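As a toy sketch, unrelated to the paper's specific change-propagation model, the classic logistic map illustrates how a simple deterministic update rule can make system behaviour effectively unpredictable: two nearly identical states diverge after repeated steps. The update rule and parameters are purely illustrative.

    # Toy sketch: sensitive dependence on initial conditions in a chaotic update
    # rule (the logistic map with r = 3.9). Purely illustrative of unpredictable
    # emergent behaviour; it is not the change-propagation model of the paper.
    def propagate(state, r=3.9):
        return r * state * (1.0 - state)

    a, b = 0.200000, 0.200001   # two almost identical initial "system states"
    for _ in range(50):
        a, b = propagate(a), propagate(b)
    # After 50 propagation steps the two trajectories typically bear no
    # resemblance to each other, despite the tiny initial difference.
    print(f"a = {a:.4f}, b = {b:.4f}")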
Abstract:
Over the past years, the paradigm of component-based software engineering has become established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties of component-based systems and propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.
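For orientation only, a minimal sketch of classic static fault tree evaluation with independent basic events; State Event Fault Trees extend this with state-event semantics that the sketch does not capture. Event names and probabilities are hypothetical.

    # Illustrative sketch: top-event probability of a small static fault tree with
    # independent basic events. Event names and probabilities are hypothetical;
    # State Event Fault Trees add state/event semantics not modelled here.
    from math import prod

    def and_gate(probabilities):
        """All inputs must fail."""
        return prod(probabilities)

    def or_gate(probabilities):
        """At least one input fails."""
        return 1.0 - prod(1.0 - p for p in probabilities)

    p_sensor_a  = 1e-3
    p_sensor_b  = 1e-3
    p_sw_module = 5e-4

    # Top event: software module fails, OR both redundant sensors fail.
    p_top = or_gate([p_sw_module, and_gate([p_sensor_a, p_sensor_b])])
    print(f"top-event probability ~ {p_top:.2e}")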