942 results for FAILURE ANALYSIS
Abstract:
Trastuzumab is a humanized monoclonal antibody developed specifically for patients with HER2/neu-overexpressing breast cancer. Although highly effective and well tolerated, it has been associated with congestive heart failure (CHF) in clinical trial settings (up to 27%). The rate of Trastuzumab-related CHF in the general population, especially among older breast cancer patients on long-term Trastuzumab treatment, remains unknown. This thesis examined the rates of, and risk factors associated with, Trastuzumab-related CHF in a large population of older breast cancer patients. A retrospective cohort study was performed using the existing linked, de-identified Surveillance, Epidemiology and End Results (SEER)-Medicare database. The study cohort comprised breast cancer patients aged 66 or older, stage I-IV, diagnosed in 1998-2007, fully covered by Medicare with no HMO enrollment within one year before and after the first diagnosis month, who received their first chemotherapy no earlier than 30 days before diagnosis. The primary outcome was a diagnosis of CHF after starting chemotherapy, with no CHF claims on or before the cancer diagnosis date. ICD-9 and HCPCS codes were used to pool claims for Trastuzumab use, chemotherapy, comorbidities and CHF. Statistical analyses included comparison of characteristics, Kaplan-Meier estimates of CHF rates over long-term follow-up, and a multivariable Cox regression model with Trastuzumab as a time-dependent variable. Of the 17,684 patients in the cohort, 2,037 (12%) received Trastuzumab. Among them, 35% (714 of 2,037) were diagnosed with CHF, compared with a CHF rate of 31% (4,784 of 15,647) in recipients of other chemotherapy (p < .0001). After 10 years of follow-up, 65% of Trastuzumab users had developed CHF, compared with 47% of their counterparts.
After adjusting for patient demographic, tumor and clinical characteristics, older breast cancer patients who used Trastuzumab had a significantly higher risk of developing CHF than recipients of other chemotherapy (HR 1.69, 95% CI 1.54-1.85), and this risk increased with age (p < .0001). Among Trastuzumab users, the following covariates also significantly increased the risk of CHF: older age, stage IV disease, non-Hispanic black race, unmarried status, comorbidities, anthracycline use, taxane use, and lower educational level. In conclusion, older breast cancer patients who used Trastuzumab had a 69% higher risk of developing CHF than non-users, much higher than the 27% increase reported in younger clinical trial patients. Older age, non-Hispanic black race, unmarried status, comorbidity, and combined use with an anthracycline or taxane also significantly increased the risk of developing CHF in older patients treated with Trastuzumab.
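The Kaplan-Meier estimate used above can be sketched compactly. The following minimal pure-Python version (with invented toy data, not the SEER-Medicare cohort) shows the underlying computation:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g. a CHF diagnosis) occurred, 0 if censored
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))       # sort subjects by follow-up time
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Count events and total exits (events + censorings) at this time.
        deaths = sum(e for (tt, e) in data if tt == t)
        exits = sum(1 for (tt, _) in data if tt == t)
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= exits
        i += exits
    return curve
```

For example, with four subjects, one censored at time 3, the survival estimate drops only at the observed event times.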
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates involved in failure times differ across latent classes, but that the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a framework of mixture modeling. A joint model is developed to incorporate the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that the effects of covariates on survival times and the distribution of covariates vary across different latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrate the superiority of our joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
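Estimating the probability that a subject belongs to a particular latent class, as described above, amounts to a Bayes "responsibility" computation. A minimal sketch for a Gaussian mixture, with toy parameters rather than anything estimated in the dissertation:

```python
import math

def responsibilities(x, weights, means, sds):
    """E-step of a Gaussian mixture: posterior probability of each
    latent class for one observation x, computed by Bayes' rule."""
    # Weighted normal density of x under each class.
    dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for w, m, s in zip(weights, means, sds)]
    total = sum(dens)
    # Normalize so the class probabilities sum to one.
    return [d / total for d in dens]
```

An observation near one component's mean is assigned to that class with probability close to one.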
Abstract:
The purpose of this dissertation was to develop a conceptual framework that can be used to account for policy decisions made by the House Ways and Means Committee (HW&MC) of the Texas House of Representatives. The analysis examines the actions of the committee over a ten-year period with the goal of explaining and predicting the success or failure of certain efforts to raise revenue. The basic framework for modelling the revenue decision-making process includes three major components: the decision alternatives, the external factors and two competing contingency theories. The decision alternatives encompass the particular options available to increase tax revenue. These options were classified as non-innovative or innovative. The non-innovative options included the sales, franchise, property and severance taxes; the innovative options were principally the personal and corporate income taxes. The external factors include the political and economic constraints that affected the actions of the HW&MC. Several key political constraints on committee decision-making are addressed, including public attitudes, interest groups, political party strength, and tradition and precedent. The economic constraints that affected revenue decisions included court mandates, federal mandates and the fiscal condition of the nation and the state. The third component of the revenue decision-making framework comprises two alternative contingency theories. The first postulates that the committee structure, including individual member roles and the overall committee style, resulted in distinctive revenue decisions. This theory is favored if the evidence points to the committee acting autonomously, with less concern for the policies of the Speaker of the House. The Speaker assignment theory postulates that the assignment of committee members shaped or changed the course of committee decision-making.
This theory is favored if there is evidence that the committee was strictly a vehicle for the Speaker to institute his preferred tax policies. The ultimate goal of this analysis is to develop an explanation for legislative decision-making about tax policy, based on the linkages across the various tax options, the political and economic constraints, member roles and committee style, and the patterns of committee assignment.
Abstract:
During vertebrate embryogenesis, cells from the paraxial mesoderm coalesce in a rostral-to-caudal progression to form the somites. Subsequent compartmentalization of the somites yields the sclerotome, myotome and dermatome, which give rise to the axial skeleton, the axial musculature and the dermis, respectively. Recently, we cloned a novel basic helix-loop-helix (bHLH) protein, called scleraxis, which is expressed in the sclerotome, in mesenchymal precursors of bone and cartilage, and in connective tissues. This dissertation focuses on the cloning, expression and functional analysis of a bHLH protein termed paraxis, which is nearly identical to scleraxis within the bHLH region but diverges in both its amino and carboxyl termini. During mouse embryogenesis, paraxis transcripts are first detected at about day 7.5 post coitum within the primitive mesoderm lying posterior to the head and heart primordia. Subsequently, paraxis expression progresses caudally through the paraxial mesoderm, immediately preceding somite formation. Paraxis is expressed at high levels in newly formed somites before the first detectable expression of the myogenic bHLH genes, and as the somite becomes compartmentalized, paraxis becomes downregulated within the myotome. To determine the function of paraxis during mammalian embryogenesis, mice were generated with a null mutation in the paraxis locus. Paraxis-null mice survived until birth, but exhibited severe foreshortening along the anteroposterior axis due to the absence of vertebrae caudal to the midthoracic region. The phenotype also included axial skeletal defects, in particular shortened bifurcated ribs detached from the vertebral column, fused vertebrae, and extensive truncation and disorganization caudal to the hindlimbs. Mutant neonates also lacked normal levels of trunk muscle and exhibited defects in the dermis as well as in the stratification of the epidermis.
Analysis of paraxis -/- mutant embryos revealed a failure of the somites both to epithelialize and to compartmentalize properly, resulting in defects in somite-derived cell lineages. These results suggest that paraxis is an essential component of the genetic pathway regulating somitogenesis.
Abstract:
BACKGROUND Pyogenic tonsillitis is often observed in the general Western population. In severe cases it may require antibiotic treatment or even hospitalization, and a prompt clinical response will often be noted. Here we present an unusual case of progressive multiple organ failure, including fulminant liver failure, following acute tonsillitis initially mistaken for "classic" pyogenic (that is, bacterial) tonsillitis. CASE PRESENTATION A 68-year-old previously healthy white man was referred with suspicion of pyogenic angina. After tonsillectomy, he developed acute liver failure and consecutive multiple organ failure, including acute hemodynamic, pulmonary and dialysis-dependent renal failure. Immunohistopathological analysis of his tonsils and liver, as well as serum polymerase chain reaction analyses, revealed herpes simplex virus-2 to be the causative pathogen. Treatment included high-dose acyclovir and multiorgan supportive intensive care therapy. His final outcome was favorable. CONCLUSIONS Fulminant herpes simplex virus-2-induced multiple organ failure is rarely observed in the Western hemisphere and should be considered a potential diagnosis in patients with tonsillitis and multiple organ failure, including acute liver failure. From a clinical perspective, it seems important to note that fulminant herpes simplex virus-2 infection may masquerade as "routine" severe bacterial sepsis/septic shock. This condition should be diagnosed early and treated in a goal-directed manner in order to gain control of this life-threatening disease.
Abstract:
Many of the material models most frequently used for the numerical simulation of the behavior of concrete subjected to high strain rates were originally developed for the simulation of ballistic impact. They are therefore plasticity-based models in which the compressive behavior is modeled in a complex way, while their tensile failure criterion is of a rather simpler nature. As concrete elements usually fail in tension when subjected to blast loading, the available concrete material models for high strain rates may not accurately represent their real behavior. In this research work, an experimental program of reinforced concrete flat elements subjected to blast load is presented. Altogether, four detonation tests were conducted, in which 12 slabs of two different concrete types were subjected to the same blast load. The results of the experimental program are then used for the development and adjustment of the numerical tools needed in the modeling of concrete elements subjected to blast.
Abstract:
Quasi-monocrystalline silicon wafers have appeared as a critical innovation in the PV industry, joining the most favourable characteristics of the conventional substrates: the higher solar cell efficiencies of monocrystalline Czochralski-Si (Cz-Si) wafers and the lower cost and full square shape of multicrystalline ones. However, quasi-mono ingot growth can lead to a different defect structure than the typical Cz-Si process. Thus, the mechanical properties of the brand-new quasi-mono wafers have been studied here for the first time, comparing their strength with that of both Cz-Si mono and typical multicrystalline materials. The study has been carried out employing the four-line bending test and simulating it by means of FE models. For the analysis, failure stresses were fitted to a three-parameter Weibull distribution. High mechanical strength was found in all cases. Interestingly, even the low-quality quasi-mono wafers did not exhibit strength values critical for the PV industry, despite their noticeable density of extended defects.
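The three-parameter Weibull fit mentioned above characterizes failure probability as a function of stress. A minimal sketch of the distribution function, with illustrative parameter values rather than those fitted in the study:

```python
import math

def weibull3_cdf(stress, threshold, scale, shape):
    """Failure probability under a three-parameter Weibull law.

    threshold -- location parameter: stress below which no failure occurs
    scale     -- characteristic strength (measured from the threshold)
    shape     -- Weibull modulus, reflecting the scatter of the strength data
    """
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** shape))
```

At a stress of threshold + scale, the failure probability is 1 - 1/e, about 63.2%, which is the usual definition of the characteristic strength.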
Abstract:
We present a tutorial overview of Ciaopp, the Ciao system preprocessor. Ciao is a public-domain, next-generation logic programming system, which subsumes ISO-Prolog and is specifically designed to a) be highly extensible via libraries and b) support modular program analysis, debugging, and optimization. The latter tasks are performed in an integrated fashion by Ciaopp. Ciaopp uses modular, incremental abstract interpretation to infer properties of program predicates and literals, including types, variable instantiation properties (including modes), non-failure, determinacy, bounds on computational cost, bounds on sizes of terms in the program, etc. Using such analysis information, Ciaopp can find errors at compile-time in programs and/or perform partial verification. Ciaopp checks how programs call system libraries and also any assertions present in the program or in other modules used by the program. These assertions are also used to generate documentation automatically. Ciaopp also uses analysis information to perform program transformations and optimizations such as multiple abstract specialization, parallelization (including granularity control), and optimization of run-time tests for properties which cannot be checked completely at compile-time. We illustrate "hands-on" the use of Ciaopp in all these tasks. By design, Ciaopp is a generic tool, which can be easily tailored to perform these and other tasks for different LP and CLP dialects.
Abstract:
We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors is first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
Abstract:
We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors is first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows detecting statically bugs in user programs even if they do not contain any assertions.
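The compile-time/run-time split described in these abstracts can be illustrated with a toy analogue outside the Prolog/CLP setting: properties that a static analyzer cannot discharge are turned into run-time tests. The decorator and property checks below are invented for illustration and are not part of the Ciao assertion language:

```python
def check(pre=None, post=None):
    """Wrap a function with run-time tests for properties that were
    not verified statically (a toy analogue of assertion checking)."""
    def decorate(fn):
        def wrapped(*args):
            if pre is not None:
                # Precondition: a property of the call not proved statically.
                assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            if post is not None:
                # Postcondition: a property of the result.
                assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return wrapped
    return decorate

@check(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)
```

A call satisfying the properties runs normally; a call violating them fails at run time, mirroring how unverified assertion fragments become run-time checks.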
Abstract:
The failure locus, the characteristics of the stress–strain curve and the damage localization patterns were analyzed in a polypropylene nonwoven fabric under in-plane biaxial deformation. The analysis was carried out by means of a homogenization model developed within the context of the finite element method. It provides the constitutive response for a mesodomain of the fabric corresponding to the area associated with a finite element, and takes into account the main deformation and damage mechanisms observed experimentally. It was found that the failure locus in stress space was accurately predicted by the von Mises criterion and that failure took place through the localization of damage into a crack perpendicular to the main loading axis.
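The von Mises failure locus found for the fabric can be sketched as a simple check on the in-plane principal stresses. The strength value below is illustrative, not a measured property of the material:

```python
import math

def von_mises_biaxial(s1, s2):
    """Von Mises equivalent stress for in-plane biaxial loading,
    given the two principal stresses (plane stress, no shear)."""
    return math.sqrt(s1 * s1 - s1 * s2 + s2 * s2)

def fails(s1, s2, strength):
    """Failure is predicted when the equivalent stress reaches the
    material strength, i.e. the stress state lies on or outside the locus."""
    return von_mises_biaxial(s1, s2) >= strength
```

Note that under equibiaxial tension (s1 == s2) the equivalent stress equals the applied stress, so the von Mises locus passes through the same point as in uniaxial loading.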
Abstract:
The direct application of existing models for seed germination may often be inadequate in the context of ecology and forestry germination experiments, because basic model assumptions are violated and the variables available to forest managers are rarely used. In this paper, we present a method which addresses these shortcomings. The approach is illustrated through a case study of Pinus pinea L. Our findings also shed light on the role of germination in the general failure of natural regeneration in managed forests of this species. The presented technique consists of a mixed regression model based on survival analysis. Climate and stand covariates were tested. Data for fitting the model were gathered from a 5-year germination experiment in a mature, managed P. pinea stand in the Northern Plateau of Spain, in which two different stand densities can be found. The model predictions proved to be unbiased and highly accurate when compared with the training data. Germination in P. pinea was controlled through thermal variables at the stand level. At the microsite level, low densities negatively affected the probability of germination. A time-lag in the response was also detected. Overall, the proposed technique provides a reliable alternative for germination modelling in ecology and forestry studies by using accessible, suitable variables. The P. pinea case study highlights the importance of producing unbiased predictions. In this species, the occurrence and timing of germination suggest a very different regeneration strategy from that assumed by forest managers until now, which may explain the high failure rate of natural regeneration in managed stands. In addition, these findings provide valuable information for the management of P. pinea under climate-change conditions.
Abstract:
Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, moldability and production economy, among other advantages. 
As is well known, concrete has high compressive and low tensile strength, and for this reason it is reinforced with steel bars to form reinforced concrete, a material that has become the most important constructive solution of our time. Despite being such a widely used material, some aspects of concrete performance are not yet fully understood, as is the case of its response to the effects of an explosion. This is a topic of particular relevance because the events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure in an explosive event is produced by the impact of the pressure shock wave generated in the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and they can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those produced by the loads usually considered in engineering. This thesis broadens the knowledge of the material behavior of concrete subjected to explosions. To that end, it is crucial to have experimental results for concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out by the armies of different countries and the results obtained are classified. Moreover, in the experimental campaigns with explosives conducted by civil institutions, the high cost of access to explosives and the lack of proper test fields do not allow a large number of samples to be tested. For this reason, the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced. 
This is due, first, to the heterogeneity of concrete and, second, to the difficulty inherent in testing with explosives, for reasons such as difficulties with the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis we have designed a novel device that allows up to four concrete slabs to be tested under the same detonation, which, apart from providing a statistically representative number of samples, represents a significant saving in costs. A total of 28 slabs, of both reinforced and plain concrete and of two different mixes, were tested using this device. Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques nowadays represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loading. However, to obtain reliable results it is crucial to have material constitutive models that take into account the parameters governing the behavior for the load case under study. In this regard, it is noteworthy that most of the constitutive models developed for concrete at high strain rates arise from the ballistic field, which is dominated by large compressive stresses in the local environment of the area affected by the impact. In the case of concrete elements subjected to an explosion, the compressive stresses are much more moderate, and it is generally the tensile stresses that cause material failure. This thesis discusses the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs subjected to blast are the tensile strength and the softening behavior after failure. 
Based on these results, we have developed a constitutive model for concrete at high strain rates that only takes tensile failure into account. This model is based on the embedded cohesive crack model with strong discontinuity developed by Planas and Sancho, which has proved its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow its use in the dynamic domain. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile response, with the softening behavior of concrete being of particular relevance.
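The tensile softening identified as the governing parameter can be pictured with the simplest cohesive law: a linear softening curve relating the stress transmitted across a crack to its opening. This is an illustrative sketch, not the actual embedded-crack model implemented in the thesis, and the parameter values in the usage note are invented:

```python
def cohesive_stress(w, f_t, G_F):
    """Linear softening cohesive law: stress transmitted across a crack
    as a function of the crack opening w.

    f_t -- tensile strength (stress at zero opening)
    G_F -- fracture energy, the area under the softening curve
    """
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    # Critical opening at which the transmitted stress vanishes:
    # for a linear law, G_F = f_t * w_c / 2.
    w_c = 2.0 * G_F / f_t
    if w >= w_c:
        return 0.0
    return f_t * (1.0 - w / w_c)
```

For instance, with f_t = 3 MPa and G_F = 0.15 N/mm, the stress drops linearly from 3 MPa at zero opening to zero at a critical opening of 0.1 mm.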
Abstract:
Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and goods of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. The benefits of networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and render the whole infrastructure network unstable. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of the stability of a system around a state of equilibrium with respect to a disturbance, together with its ability to prevent, resist and recover from it. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events, triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or whether they end in network failure. The proposed methodology aims at providing effective support to resilience-informed design.
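The propagation of failures through a network of dependencies can be sketched with a crude fixed-point cascade. The systems and the all-or-nothing dependency rule below are invented for illustration and are far simpler than the paper's methodology:

```python
def propagate(dependencies, initial_failures):
    """Propagate failures through a network of interdependent systems.

    dependencies     -- dict mapping each system to the systems it depends on
    initial_failures -- systems knocked out by the initial disturbance
    Rule (deliberately crude): a system fails if any system it depends
    on has failed. Returns the set of failed systems once the cascade
    settles (a fixed point is reached).
    """
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for system, deps in dependencies.items():
            if system not in failed and any(d in failed for d in deps):
                failed.add(system)
                changed = True
    return failed
```

Even this toy rule exhibits the paper's central concern: a single off-equilibrium condition in one system can cascade into a failure of much of the network.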
Abstract:
The road to the automation of agricultural processes passes through the safe operation of autonomous vehicles. This requirement is already met for ground mobile units, but it is still not well defined for aerial robots (UAVs), mainly because the applicable regulations and legislation are quite diffuse or even nonexistent. Defining a common and global policy is therefore the challenge to tackle, and this characterization has to be addressed from field experience. Accordingly, this paper presents work done in this direction, based on an analysis of the most common sources of hazards when using UAVs for agricultural tasks. The work, based on the ISO 31000 standard, has been carried out by applying a three-step structure that integrates identification, assessment and reduction procedures. The paper describes how this method has been applied to analyze previous accidents and malfunctions during UAV operations in order to obtain real failure causes. This has made it possible to highlight common risks and hazard sources and to propose specific guards and safety measures for the agricultural context.