919 results for Poisson Mixed Model
Abstract:
Background: This study examined the daily surgical scheduling problem in a teaching hospital. The problem involves the use of multiple operating rooms and different types of surgeons during a typical surgical day, with deterministic operation durations (preincision, incision, and postincision times). Teaching hospitals play a key role in the health-care system; however, existing models assume that the duration of surgery is independent of the surgeon's skills, an issue that has not been properly addressed in other studies. We analyze the case of a Spanish public hospital, in which continual pressure and budget reductions demand more efficient use of resources. Methods: To obtain an optimal solution for this problem, we developed a mixed-integer programming model and a user-friendly interface that facilitate the scheduling of planned operations for the following surgical day. We also implemented a simulation model to assist in the evaluation of different dispatching policies for surgeries and surgeons. The aspects we took into account were the type of surgeon, potential overtime, surgeon idle time, and the use of operating rooms. Results: It is necessary to consider the expertise of a given surgeon when formulating a schedule: such skill can decrease the probability of delays that could affect subsequent surgeries or cause cancellation of the final surgery. We obtained optimal solutions for a set of instances built from surgical timing data collected at a Spanish public hospital. Conclusions: We developed a computer-aided framework with a user-friendly interface for use by a surgical manager that presents a 3-D simulation of the problem. Additionally, we obtained an efficient formulation for this complex problem. However, the spread of this kind of operations research in Spanish public hospitals will take a long time, since there is a lack of awareness of the techniques and possibilities that operational research can offer the health-care system.
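As a hedged illustration of the kind of mixed-integer programming formulation such a scheduler might use, the sketch below assigns surgeries to operating rooms while minimizing total overtime. All names, durations and the shift length are hypothetical, and the model is far simpler than the one described in the study (it omits surgeon types and sequencing). It assumes the PuLP library (`pip install pulp`).

```python
# A minimal sketch (not the authors' model): assign surgeries to
# operating rooms with a mixed-integer program, penalizing overtime.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

surgeries = {"s1": 120, "s2": 90, "s3": 60, "s4": 150}  # minutes (hypothetical)
rooms = ["OR1", "OR2"]
shift = 240  # regular minutes available per room (assumed)

prob = LpProblem("daily_surgical_scheduling", LpMinimize)
x = {(s, r): LpVariable(f"x_{s}_{r}", cat=LpBinary) for s in surgeries for r in rooms}
overtime = {r: LpVariable(f"ot_{r}", lowBound=0) for r in rooms}

# each surgery is assigned to exactly one room
for s in surgeries:
    prob += lpSum(x[s, r] for r in rooms) == 1

# room load beyond the shift length counts as overtime
for r in rooms:
    prob += lpSum(surgeries[s] * x[s, r] for s in surgeries) - shift <= overtime[r]

# objective: minimize total overtime across rooms
prob += lpSum(overtime[r] for r in rooms)
prob.solve()
for (s, r), var in x.items():
    if var.value() == 1:
        print(s, "->", r)
```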
Abstract:
The main objective of this research is the design of a Model for Operational Risk Management (MORM) according to the guidelines of the Basel II and III Accords of the Basel Committee on Banking Supervision of the Bank for International Settlements (BCBS-BIS). A study of this subject is considered important because operational risks (OpR) were largely responsible for the recent world financial crises and because of the difficulty of detecting them in organizations. The management model is divided into two avenues of influence. The first adopts the holistic paradigm, in which a cyclical process can be perceived in multiple ways, and which supplies the tools to observe, know and understand the object or subject perceived. The second is the totalizing paradigm, in which both qualitative and quantitative data are obtained and complement each other. This work also presents the design of qualitative OpR software, intended to determine the root of risks in organizations and their Operational Value at Risk (OpVaR) based on the basic indicator approach. Applying the holistic cycle to the case study yielded the following research design: non-experimental, univariate, descriptive cross-sectional, contemporary, retrospective, mixed-source, qualitative (phenomenological and ethnographic) and quantitative (descriptive and analytical). Decision making and data collection were conducted in two phases in the study unit. The first considered the entire Corpoelec-EDELCA company, with a statistical universe of 4271 individuals, a population of 2390 individuals and a sampling unit of 87 individuals. The process was repeated in a second phase for the Simón Bolívar Hydroelectric Power Plant, for which a second statistical universe of 300 workers, a population of 191 people and a sample of 58 professionals were determined. Primary and secondary sources were used for data collection. Primary information was gathered through direct observation, two surveys designed to identify the areas and processes with the highest risk levels, and a questionnaire combined with an ad hoc survey to establish estimates of the frequency and severity of operational losses. Secondary information was extracted from the databases of Corpoelec-EDELCA, the IEA, the World Bank, the BCBS-BIS, the UPM and UC at Berkeley, among others. The operational loss frequency and severity distributions were established as the independent variables and OpVaR as the dependent variable. No monitoring or control of the variables under analysis was performed, since they were considered at a specific instant and determined only to establish the existence and point assessment of the OpR in the study unit. The qualitative analysis proposed in the MORM detected that, in the research unit, 67% of the detected OpR come from two main sources: processes (32%) and external events (35%). Additionally, validation of the MORM at Corpoelec-EDELCA showed that 63% of the OpR in the organization come from three main categories, with external fraud being the most frequent and most severe source of losses in the organization. Risk exposure was determined by adapting the concept of OpVaR, generally used for time series, which in this case study is novel in being applied to qualitative data transformed with a Likert scale. The possibility of using probability distributions typical of quantitative data for loss frequency and loss severity distributions built from data of qualitative origin was analyzed. For 64% of the OpR studied, the frequency behaved like a Poisson probability distribution, and in 55% of the cases the log-normal was the most common probability distribution for loss severity, from which it was concluded that the approaches suggested by the BCBS-BIS for time series are applicable to qualitative data. Once the loss frequency and severity distributions were obtained, they were convolved using the Monte Carlo method, yielding the loss distribution approach (LDA) for each of the OpR. The OpVaR was derived, as suggested by the BCBS-BIS, from the 99.9th or 99th percentile of each LDA. The OpR were found to behave similarly to those of the financial system, the most dangerous being those of low frequency and high impact, owing to the difficulty of detecting and monitoring them. Finally, it is considered that the MORM will allow market players and their stakeholders to know the status of their entities effectively, reliably and efficiently, which will reduce the uncertainty of their investments and enable them to establish a new management culture in their organizations.
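The frequency-severity convolution described above can be sketched in a few lines. This is a generic LDA Monte Carlo, not the study's implementation; the Poisson rate `lam` and the log-normal parameters `mu` and `sigma` are hypothetical placeholders.

```python
# Minimal LDA sketch: Poisson loss frequency convolved with log-normal
# severities via Monte Carlo; OpVaR read off as the 99.9th percentile.
import numpy as np

rng = np.random.default_rng(seed=0)
lam = 4.0              # mean number of loss events per year (assumed)
mu, sigma = 10.0, 1.2  # log-normal severity parameters (assumed)
n_sims = 100_000

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(lam)                      # frequency draw
    severities = rng.lognormal(mu, sigma, n_events)  # severity draws
    annual_losses[i] = severities.sum()              # aggregate annual loss

op_var = np.percentile(annual_losses, 99.9)
print(f"OpVaR (99.9th percentile): {op_var:,.0f}")
```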
Abstract:
Mediterranean forests have undergone multiple changes over recent decades (in both climate and land use), which have led to variations in the distribution of species. The expected increase in mean temperatures, together with greater inter- and intra-annual variability in the occurrence of extreme events and natural disturbances (such as prolonged drought periods, cold or heat waves, wildfires or strong winds), can significantly damage natural regeneration, even causing death, and play a decisive role in species composition and forest dynamics. The ecological amplitude of many forest species may be affected, so that changes in their current regeneration niches are expected. However, the forecasted poleward migration of species seeking better conditions could be an oversimplification of a much more complex process of interaction between temperature and precipitation that affects each species differently. In this regard, both the ability of a given species to adapt to environmental stress and its ability to compete for limited resources could lead to variations within a community. The physiological and morphological traits specific to each species are strongly related to where each species can emerge, which species can coexist, and how they respond to environmental conditions. Knowledge of the ecophysiological responses observed under changing environmental conditions can therefore be essential for predicting variations in species distribution, community composition and forest productivity under global change. In this thesis we investigated the degree of tolerance and sensitivity that each of three co-occurring species of the central Iberian Peninsula (Pinus pinea, Quercus ilex and Juniperus oxycedrus) shows towards the abiotic stress factors typical of the Mediterranean region. Our work is based on defining the optimal physiological niche for the regeneration of each species through in-depth research on the effects of drought, temperature and light environment. For this purpose, we developed a model to predict the carbon assimilation rate, which allowed us to identify the optimal environmental conditions in which regeneration of each species could establish itself most easily. To support this work, and to study the effect of drought at the whole-plant level, we conducted a parallel greenhouse experiment in which two watering regimes were applied to study the physiological and morphological traits of each species, mainly at the level of root system and stem growth, and to relate them to the species' different water-use strategies. Finally, we studied the cold acclimation and deacclimation patterns of each species, identifying periods of frost sensitivity as well as bottlenecks where competition between species may arise. Although stone pine has been the species targeted by the management of these stands for centuries, it is currently in the most unfavorable position to cope with global change, presenting the narrowest physiological niche of the three species. Holm oak, however, proved to be the species best qualified to face this change, followed closely by prickly juniper. Our results suggest a feasible expansion of the distribution range of holm oak, an increase in the presence of prickly juniper and a progressive medium-term decline of stone pine in these mixed stone pine-holm oak-prickly juniper forests.
Abstract:
Embedded systems are becoming more common and more complex every day, so finding safe, efficient and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Computer processing power is continuously increasing, and as a result it is currently possible to integrate several software systems in a single processor, which was not feasible a few years ago. Moreover, some embedded systems have safety requirements: many lives and/or large economic investments depend on their correct operation. Such systems are designed and implemented according to very strict and demanding software development standards, and in some cases certification of the software is also necessary. For these cases, mixed-criticality systems are a valuable alternative: applications of different criticality levels execute on the same computer. However, it is often necessary to certify the whole system at the criticality level of the most critical application, including non-critical applications, which makes costs soar. Virtualization emerges as an enabling technology for containing these costs: the system is structured as a set of partitions, or virtual machines, that execute applications with very high levels of both temporal and spatial isolation, which in turn allows each partition to be developed and certified independently. The development of MCPS (Mixed-Criticality Partitioned Systems) requires roles and activities that traditional development does not: the system integrator has to define the system partitions, and application developers have to consider the characteristics of the partition to which their application is allocated. Traditional software process models must therefore be adapted to this scenario. The V-model, which has been especially relevant in embedded systems development, has been adapted here to cover scenarios such as the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD thesis is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS, including the new roles and activities mentioned above. The framework is based on MDE (Model-Driven Engineering), which promotes the use of models as fundamental elements of the development process, and it provides the means for modeling and partitioning the system, validating the results, and generating the artifacts needed to compile, build and deploy it. The framework has been designed to facilitate its extension and its integration with external validation tools; in particular, it can be extended with support for additional non-functional requirements and for new final artifacts, such as further programming languages or additional documentation. A key part of the framework is the partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved through partitioning constraints that the resulting partitioning is guaranteed to satisfy. The constraints have sufficient expressive capacity for a small set of them to state the most common non-functional requirements, and they can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs, and generates a deployment model that defines the partitions the system needs; each partition in turn defines the applications that must execute in it and the resources it requires to execute correctly. The partitioning problem and its constraints are modeled mathematically as a colored graph, in which a proper vertex coloring represents a valid system partitioning; the algorithm has also been designed so that, if necessary, alternative partitionings to the one initially proposed can be obtained. The framework, including the partitioning algorithm, has been successfully used in two industrial use cases: the UPMSat-2 satellite and a demonstrator of a wind turbine control system. The algorithm has also been validated by running numerous synthetic scenarios, including highly complex ones with more than 500 applications.
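The colored-graph view of partitioning lends itself to a compact illustration. The sketch below is a minimal stand-in for the thesis algorithm: applications become vertices, each separation constraint becomes an edge, and a proper vertex coloring (obtained greedily here with networkx) maps applications to partitions. Application names and constraints are hypothetical.

```python
# Minimal sketch of partitioning as proper vertex coloring: an edge
# joins two applications that a constraint forbids from sharing a
# partition; each color then corresponds to one partition.
import networkx as nx

g = nx.Graph()
g.add_nodes_from(["app_critical", "app_telemetry", "app_logging", "app_gui"])
# separation constraints (hypothetical): these pairs must not share a partition
g.add_edges_from([
    ("app_critical", "app_logging"),
    ("app_critical", "app_gui"),
    ("app_telemetry", "app_gui"),
])

coloring = nx.greedy_color(g, strategy="largest_first")
partitions = {}
for app, color in coloring.items():
    partitions.setdefault(color, []).append(app)
print(partitions)  # e.g. {0: ['app_critical', ...], 1: [...]}
```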
Abstract:
Martensitic transformation (MT), in a narrow sense, is defined as a change of the crystal structure to form a coherent phase, or multi-variant domain structures, out of a parent phase with the same composition, by small shuffles or cooperative movements of atoms. Over the past century, MTs have been discovered in materials ranging from steels to shape memory alloys, ceramics and smart materials. They lead to remarkable properties such as high mechanical strength, shape memory and superelasticity effects, or ferroic functionalities including piezoelectricity, electro- and magneto-striction. Various theories and models have been developed, in synergy with the development of solid state physics, to understand why MTs generate such rich and varied microstructures with intriguing properties. Among the well-established theories, the Phenomenological Theory of Martensitic Crystallography (PTMC) predicts the habit plane and the orientation relationship between austenite and martensite. The reinterpretation of the PTMC theory within a continuum mechanics framework (CM-PTMC) explains the formation of multivariant domain structures, while the Landau theory with inertial dynamics unravels the physical origins of precursors and other dynamic behaviors. Crystal lattice dynamics unveils the acoustic softening of the lattice strain waves that leads to the weak first-order displacive transformation. Although these theories differ in their static or dynamic character owing to their origins in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), they should be inherently connected to each other and show common elements within a unified physical perspective. However, the physical connections and distinctions among the theories and models had not been addressed to date, although they are critical for further improving the models of MTs and for the integrated development of models of more complex displacive-diffusive coupled transformations. This thesis therefore started with two clear objectives. The first was to reveal the physical connections and distinctions among the models of MT by means of detailed theoretical analyses and numerical simulations. The second was to expand the Landau model to be able to study MTs in polycrystals, in displacive-diffusive coupled transformations, and in the presence of dislocations. Starting with a comprehensive review, the physical kernels of the current models of MTs are presented. Their ability to predict MTs is clarified by means of theoretical analyses and simulations of the microstructure evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and to the microelasticity model in predicting the static features of MTs, but provides a better interpretation of the dynamic behaviors. However, applications of the Landau model to structural materials are limited by its complexity. Thus, the first result of this thesis is the development of a nonlinear Landau model with irreducible representation of strains and inertial dynamics for polycrystals. The simulations demonstrate that the proposed model is physically consistent with the CM-PTMC in its static description, and also predicts a phase diagram with the classical 'C' shape of martensitic nucleation modes activated by the combination of quenching temperature and applied stress conditions interplaying with the Landau transformation energy. Next, the Landau model of MT is integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and short-range diffusion of elements during MT in steel. The model for displacive-diffusive transformations includes the effects of grain boundary relaxation for heterogeneous nucleation and the spatio-temporal evolution of diffusion potentials and chemical mobilities through coupling with a CALPHAD-type thermo-kinetic calculation engine and database. The model is applied to study the microstructure evolution of polycrystalline carbon steels processed by quenching and partitioning (Q&P) in 2D, and the simulated microstructure and composition distribution are compared with available experimental data. The results show the important role played by the differences in diffusion mobility between the austenite and martensite phases in the carbon distribution of these steels. Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model to account for the morphological differences between steels and shape memory alloys with the same symmetry breaking. Dislocation nucleation, the formation of 'butterfly' martensite and the redistribution of carbon after tempering are well represented in 2D simulations of the microstructure evolution of representative steels. With these simulations we demonstrate that including dislocations reproduces, in good agreement with experimental data, observations such as the morphology of twin boundaries and the existence of retained austenite within martensite in these steels. Thus, based on the integrated model and the in-house codes developed in this thesis, a preliminary multi-field, multiscale modeling tool has been built. The tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase-field/Landau models at the mesoscale, and also includes the essentials of crystallography and crystal lattice dynamics at the microscale.
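For reference, a common minimal form of the Landau free-energy density for a weakly first-order martensitic transformation is the 2-4-6 polynomial below; this is a textbook form given as context, not necessarily the exact functional used in the thesis.

```latex
% Generic 2-4-6 Landau polynomial for a weakly first-order MT;
% \eta is the structural order parameter and A(T) changes sign at T_0.
f(\eta) = \frac{A(T)}{2}\,\eta^{2} - \frac{B}{4}\,\eta^{4} + \frac{C}{6}\,\eta^{6},
\qquad A(T) = a\,(T - T_{0}), \quad B, C > 0
```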
Abstract:
This study explored the utility of the impact response surface (IRS) approach for investigating model ensemble crop yield responses under a large range of changes in climate. IRSs of spring and winter wheat (Triticum aestivum) yields were constructed from a 26-member ensemble of process-based crop simulation models for sites in Finland, Germany and Spain across a latitudinal transect. The sensitivity of modelled yield to systematic increments of changes in temperature (-2 to +9°C) and precipitation (-50 to +50%) was tested by modifying values of baseline (1981 to 2010) daily weather, with CO2 concentration fixed at 360 ppm. The IRS approach offers an effective method of portraying model behaviour under changing climate as well as advantages for analysing, comparing and presenting results from multi-model ensemble simulations. Though individual model behaviour occasionally departed markedly from the average, ensemble median responses across sites and crop varieties indicated that yields decline with higher temperatures and reduced precipitation, and increase with higher precipitation. Across the uncertainty ranges defined for the IRSs, yields were more sensitive to temperature than to precipitation changes at the Finnish site, while sensitivities were mixed at the German and Spanish sites. Precipitation effects diminished under higher temperature changes. While the bivariate and multi-model characteristics of the analysis impose some limits to interpretation, the IRS approach nonetheless provides additional insights into sensitivities to inter-model and inter-annual variability. Taken together, these sensitivities may help to pinpoint processes such as heat stress, vernalisation or drought effects requiring refinement in future model development.
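The construction of an IRS can be sketched as a systematic sweep over the two perturbation axes. The function `toy_yield` below is a fabricated stand-in for a crop model run, used only to show how the grid underlying the surface is assembled; it is not one of the 26 ensemble models used in the study.

```python
# Minimal IRS sketch: sweep temperature and precipitation perturbations,
# run a stand-in crop model for each pair, and collect yields on a grid.
import numpy as np

def toy_yield(dT, dP):
    """Stand-in crop response: penalize warming, reward precipitation."""
    return max(0.0, 6.0 - 0.35 * dT + 0.02 * dP)  # t/ha, fabricated shape

temp_changes = np.arange(-2, 10)          # -2 .. +9 degrees C
precip_changes = np.arange(-50, 51, 10)   # -50% .. +50%

surface = np.array([[toy_yield(dT, dP) for dP in precip_changes]
                    for dT in temp_changes])
# surface[i, j] is the simulated yield for the perturbation pair
# (temp_changes[i], precip_changes[j]); contouring it gives the IRS.
print(surface.shape)  # (12, 11)
```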
Abstract:
The kinetics of amyloid fibril formation by beta-amyloid peptide (Abeta) are typical of a nucleation-dependent polymerization mechanism. This type of mechanism suggests that the study of the interaction of Abeta with itself can provide some valuable insights into Alzheimer disease amyloidosis. Interaction of Abeta with itself was explored with the yeast two-hybrid system. Fusion proteins were created by linking the Abeta fragment to a LexA DNA-binding domain (bait) and also to a B42 transactivation domain (prey). Protein-protein interactions were measured by expression of these fusion proteins in Saccharomyces cerevisiae harboring lacZ (beta-galactosidase) and LEU2 (leucine utilization) genes under the control of LexA-dependent operators. This approach suggests that the Abeta molecule is capable of interacting with itself in vivo in the yeast cell nucleus. LexA protein fused to the Drosophila protein bicoid (LexA-bicoid) failed to interact with the B42 fragment fused to Abeta, indicating that the observed Abeta-Abeta interaction was specific. Specificity was further shown by the finding that no significant interaction was observed in yeast expressing LexA-Abeta bait when the B42 transactivation domain was fused to an Abeta fragment with Phe-Phe at residues 19 and 20 replaced by Thr-Thr (AbetaTT), a finding that is consistent with in vitro observations made by others. Moreover, when a peptide fragment bearing this substitution was mixed with native Abeta-(1-40), it inhibited formation of fibrils in vitro as examined by electron microscopy. The findings presented in this paper suggest that the two-hybrid system can be used to study the interaction of Abeta monomers and to define the peptide sequences that may be important in nucleation-dependent aggregation.
Abstract:
Peptides of 5 and 8 residues encoded by the leaders of attenuation-regulated chloramphenicol-resistance genes inhibit the peptidyltransferase of microorganisms from all three kingdoms. Therefore, the ribosomal target for the peptides is likely to be a conserved structure and/or sequence. The inhibitor peptides "footprint" to nucleotides of domain V in large subunit rRNA when peptide-ribosome complexes are probed with dimethyl sulfate. Accordingly, rRNA was examined as a candidate for the site of peptide binding. Inhibitor peptides MVKTD and MSTSKNAD were mixed with rRNA phenol-extracted from Escherichia coli ribosomes. The conformation of the RNA was then probed by limited digestion with nucleases that cleave at single-stranded (T1 endonuclease) and double-stranded (V1 endonuclease) sites. Both peptides selectively altered the susceptibility of domains IV and V of 23S rRNA to digestion by T1 endonuclease. Peptide effects on cleavage by V1 nuclease were observed only in domain V. The T1 nuclease susceptibility of domain V of in vitro-transcribed 23S rRNA was also altered by the peptides, demonstrating that peptide binding to the rRNA is independent of ribosomal protein. We propose that the peptides MVKTD and MSTSKNAD perturb the catalytic activities of the peptidyltransferase center by altering the conformation of domains IV and V of 23S rRNA. These findings provide a general mechanism through which nascent peptides may cis-regulate the catalytic activities of translating ribosomes.
Abstract:
The aim of this study was to evaluate genetic aspects related to in vitro embryo production in the Guzerá breed. The first study focused on estimating genetic and phenotypic (co)variances for traits related to embryo production and on detecting a possible association with age at first calving (AFC). Low to moderate heritability was detected for traits related to oocyte and embryo production, and there was a weak genetic association between traits linked to assisted reproduction and age at first calving. The second study evaluated genetic and inbreeding trends in a Guzerá population in Brazil. Donors and in vitro-produced embryos were treated as two subpopulations in order to compare annual genetic change and inbreeding coefficients. The annual trend of the inbreeding coefficient (F) was higher for the general population, with a quadratic effect detected; however, the mean F of the embryo subpopulation was higher than that of the general population and of the donors. A higher annual genetic gain for age at first calving and for 305-day milk yield was observed among in vitro-produced embryos than among donors or the general population. The third study examined the effects of the inbreeding coefficients of the donor, of the sire (used in in vitro fertilization) and of the embryos on in vitro embryo production results in the Guzerá breed; effects of donor and embryo inbreeding on the studied traits were detected. The fourth and final study compared the fit of linear and generalized linear mixed models under restricted maximum likelihood (REML) and their suitability for discrete variables. Four hierarchical models assuming different distributions were fitted to the count data in the database. Inference was based on residual diagnostics and on comparison of the ratios between variance components across models for each variable. Poisson models outperformed both the linear model (with and without transformation of the response) and the negative binomial model in goodness of fit and predictive ability, despite clear differences observed in the distributions of the variables. Among the models tested, the worst fit was obtained for the linear model with a logarithmic transformation (log10(X + 1)) of the response variable.
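The model comparison described in the fourth study can be illustrated on simulated counts. The sketch below fits a Poisson model and a log10(X + 1) linear model with statsmodels; it uses a plain GLM rather than the study's hierarchical (mixed) models, and the data and covariate are simulated, not the study's.

```python
# Minimal sketch: Poisson GLM vs. log-transformed linear model on
# simulated embryo counts with one hypothetical covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
donor_age = rng.uniform(3, 10, n)        # hypothetical covariate
mu = np.exp(0.8 + 0.15 * donor_age)      # true Poisson mean
embryos = rng.poisson(mu)                # simulated embryo counts

X = sm.add_constant(donor_age)

# Poisson GLM on the raw counts
poisson_fit = sm.GLM(embryos, X, family=sm.families.Poisson()).fit()

# linear model on log10(X + 1), as in the worst-fitting model reported
linear_fit = sm.OLS(np.log10(embryos + 1), X).fit()

print("Poisson deviance:", round(poisson_fit.deviance, 1))
print("Linear R^2 on transformed scale:", round(linear_fit.rsquared, 3))
```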
Abstract:
We introduce a model of a nonlinear double-barrier structure to describe in a simple way the effects of electron-electron scattering while remaining analytically tractable. The model is based on a generalized effective-mass equation in which a nonlinear local field interaction is introduced to account for those inelastic scattering phenomena. Resonance peaks seen in the transmission coefficient spectra for the linear case appear shifted to higher energies depending on the magnitude of the nonlinear coupling. Our results are in good agreement with self-consistent solutions of the Schrödinger and Poisson equations. The calculation procedure is seen to be very fast, which makes our technique a good candidate for a rapid approximate analysis of these structures.
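A generalized effective-mass equation with a local nonlinear interaction term of the general kind described here can be written as follows; the cubic form and the coupling constant α are illustrative assumptions, not taken from the paper.

```latex
% Generalized effective-mass equation with a local nonlinear coupling;
% \alpha controls the strength of the electron-electron interaction term.
-\frac{\hbar^{2}}{2 m^{*}} \frac{d^{2}\psi(z)}{dz^{2}}
+ \left[ V(z) + \alpha\,|\psi(z)|^{2} \right] \psi(z) = E\,\psi(z)
```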
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full film lubrication regimes) and working conditions (static, quasi-static and transient conditions). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of the Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model that accordingly ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscous-pressure (Barus and Roelands equations), viscous-shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships have also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of the mixed-lubrication phenomena has been included in the modelling by the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. On this extensive mathematical modelling background, three significant contributions have been accomplished. Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication model was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
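For reference, a standard one-dimensional statement of the mass-conserving p-θ Elrod-Adams form of the Reynolds equation, with the JFO complementarity conditions, reads as follows (notation assumed, not copied from the thesis):

```latex
% Isothermal Reynolds equation with Elrod-Adams cavitation:
% p = pressure, \theta = film fraction, h = film thickness,
% \mu = viscosity, U = sliding speed.
\frac{\partial}{\partial x}\!\left( \frac{h^{3}}{12\mu}\,
\frac{\partial p}{\partial x} \right)
= \frac{U}{2}\,\frac{\partial (\theta h)}{\partial x}
+ \frac{\partial (\theta h)}{\partial t},
\qquad p \geq 0, \quad 0 \leq \theta \leq 1, \quad p\,(1 - \theta) = 0
```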
Abstract:
As world communication, technology, and trade become increasingly integrated through globalization, multinational corporations seek employees with global leadership experience and skills. However, the demand for these skills currently outweighs the supply. Given the rarity of globally ready leaders, global competency development should be emphasized in higher education programs. The reality, however, is that university graduate programs are often outdated and focus mostly on cognitive learning. Global leadership competence requires moving beyond the cognitive domain of learning to create socially responsible and culturally connected global leaders. This requires attention to development methods; however, limited research in global leadership development methods has been conducted. A new conceptual model, the global leadership development ecosystem, was introduced in this study to guide the design and evaluation of global leadership development programs. It was based on three theories of learning and was divided into four development methodologies. This study quantitatively tested the model and used it as a framework for an in-depth examination of the design of one International MBA program. The program was first benchmarked, by means of a qualitative best practices analysis, against the top-ranking IMBA programs in the world. Qualitative data from students, faculty, administrators, and staff was then examined, using descriptive and focused data coding. Quantitative data analysis, using PASW Statistics software, and a hierarchical regression, showed the individual effect of each of the four development methods, as well as their combined effect, on student scores on a global leadership assessment. The analysis revealed that each methodology played a distinct and important role in developing different competencies of global leadership. It also confirmed the critical link between self-efficacy and global leadership development.
Abstract:
Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs, rigorously following the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell–Delaware method is used for the shell-side calculations. This approach produces a large, non-convex model that cannot be solved to global optimality with current state-of-the-art solvers. Nevertheless, a sequential optimization approach over partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these subproblems a heuristic objective function is selected based on the physical behavior of the problem. The global optimum of the original problem cannot be guaranteed even when each of the subproblems is solved to global optimality, but a very good solution is always obtained. Three cases extracted from the literature were studied. The results showed that in all cases the values obtained using the proposed MINLP model with its multiple objective functions improved on the values presented in the literature.
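The sequential strategy can be sketched generically: solve a chain of small subproblems, each with its own heuristic objective, fixing the variables a stage decides before moving to the next. The stages, bounds and objectives below are hypothetical placeholders, not the paper's TEMA design equations.

```python
# Minimal sketch of sequential optimization of partial objective
# targets: each stage solves an easier subproblem and fixes its result.
from scipy.optimize import minimize_scalar

fixed = {}  # design decisions accumulated stage by stage

def stage_shell_diameter():
    # heuristic target: an assumed trade-off for shell-side cost
    res = minimize_scalar(lambda d: (d - 0.6) ** 2 + 0.1 / d,
                          bounds=(0.2, 1.5), method="bounded")
    fixed["shell_diameter_m"] = res.x

def stage_tube_length():
    # heuristic target that reuses the already-fixed shell diameter
    d = fixed["shell_diameter_m"]
    res = minimize_scalar(lambda L: abs(L / d - 8.0),  # assumed L/D target
                          bounds=(1.0, 7.0), method="bounded")
    fixed["tube_length_m"] = res.x

for stage in (stage_shell_diameter, stage_tube_length):
    stage()  # each stage is a smaller, easier problem
print(fixed)
```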
Abstract:
This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs), in which the handling pressure of process streams is used to enhance heat integration. The proposed approach combines generalized disjunctive programming (GDP) and a mixed-integer nonlinear programming (MINLP) formulation in order to minimize the total annualized cost, composed of operational and capital expenses. A multi-stage superstructure is developed for the HEN synthesis, assuming constant heat capacity flow rates and isothermal mixing, and allowing for stream splits. In this model the pressure and temperature of the streams must be treated as optimization variables, further increasing the complexity and difficulty of solving the problem. In addition, the model allows the coupling of compressors and turbines to save energy. A case study is performed to verify the accuracy of the proposed model. In this example, the optimal integration of heat and work decreases the need for thermal utilities in the HEN design. As a result, the total annualized cost is also reduced, owing to the decrease in the operational expenses related to heating and cooling the streams.
Abstract:
Introduction The strong expansion of world plastics production has caused a severe accumulation of plastic debris in the environment, making plastics one of the most important contaminants and a growing global environmental problem. Although production in Europe has been relatively constant over the last 10 years, world plastic production continues to increase, affecting soil biota and their functions. Objectives Thus, in order to evaluate the effects of microplastics (MP) on soil-dwelling organisms, earthworms (Eisenia andrei Bouché) were exposed to standard artificial soil mixed with MPs, and the authors documented, in microscopic images, the pathological lesions found in this biological model. Material and Methods Eight adult earthworms extracted from soils contaminated with different concentrations of MP (mg/kg dw), with sizes ranging between 250 and 1000 μm, were fixed in 10% neutral-buffered formalin and processed for routine histopathological diagnosis. Results and discussion Contrary to expectations, MP were not found throughout the GI tract of the earthworms, but several lesions were found in the individuals extracted from the soils with high MP concentrations compared with the control group, namely intestinal epithelial atrophy and evidence of inflammatory responses to this stress agent. Conclusion The earthworms probably avoided consuming the largest MPs. However, the evidence points to lesions likely caused by the smallest MPs, which were probably egested during the depuration phase.