907 results for Mixed model under selection
Abstract:
Nitrogen sputtering yields as high as 10⁴ atoms/ion are obtained by irradiating N-rich Cu3N films (N concentration: 33 ± 2 at.%) with Cu ions at energies in the range 10–42 MeV. The kinetics of N sputtering as a function of ion fluence is determined at several energies (stopping powers) for films deposited on both glass and silicon substrates. The kinetic curves show that the amount of nitrogen released increases strongly with irradiation fluence until reaching a saturation level at a low remaining nitrogen fraction (5–10%), beyond which no further nitrogen reduction is observed. The sputtering rate for nitrogen depletion is found to be independent of the substrate and to increase linearly with electronic stopping power (Se). A stopping-power threshold (Sth) of ~3.5 keV/nm for nitrogen depletion has been estimated by extrapolation of the data. The experimental kinetic data have been analyzed within a bulk molecular recombination model. The microscopic mechanisms of the nitrogen depletion process are discussed in terms of a non-radiative exciton decay model. In particular, the estimated threshold is related to the minimum exciton density required to achieve efficient sputtering rates.
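As an illustration of the saturating kinetics described above, the sketch below fits a first-order depletion curve (remaining nitrogen fraction versus fluence) to hypothetical data; the functional form, the data values and the effective cross-section `sigma` are assumptions for demonstration, not the authors' bulk molecular recombination model.

```python
import numpy as np
from scipy.optimize import curve_fit

def depletion(phi, n_sat, sigma):
    """Remaining N fraction vs. ion fluence phi (ions/cm^2):
    first-order decay toward a saturation level n_sat."""
    return n_sat + (1.0 - n_sat) * np.exp(-sigma * phi)

# Hypothetical kinetic data (fluence in ions/cm^2, remaining N fraction).
phi = np.array([0.0, 1e13, 3e13, 6e13, 1e14, 2e14])
frac = np.array([1.00, 0.76, 0.45, 0.22, 0.12, 0.07])

(n_sat, sigma), _ = curve_fit(depletion, phi, frac, p0=(0.1, 1e-14))
print(f"saturation fraction ~{n_sat:.2f}, effective cross-section ~{sigma:.1e} cm^2")
```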
Abstract:
In the past few years IT outsourcing has gained a lot of importance in the market and, for example, the IT services outsourcing market is still growing every year. Now more than ever, organizations are increasingly becoming acquirers of needed capabilities by obtaining products and services from suppliers, developing less and less of these capabilities in-house. IT supplier selection is a complex and opaque decision problem. Managers facing an IT supplier selection decision have difficulty framing what needs to be thought about. According to a study by the SEI (Software Engineering Institute) [40], 20 to 25 percent of large information technology (IT) acquisition projects fail within two years and 50 percent fail within five years. Mismanagement, poor requirements definition, lack of the comprehensive evaluations that could be used to identify the best candidates for outsourcing, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are factors that contribute to project failure. Most of these failures could be avoided if the acquirer learned to understand the decision problem, perform better decision analysis, and exercise good judgment. The main objective of this work is the development of a decision model for IT supplier selection that aims to reduce the number of failures seen in client-supplier relationships, most of which are caused by poor supplier selection on the client's side. Beyond the problems described above, this work is also motivated by the absence of any decision model based on a multi-model approach (a combination of acquisition models and decision methods) for the IT supplier selection problem. In the case study, nine Spanish companies were analyzed according to the IT supplier selection decision model developed in this work. Two software products were used in the case study: Expert Choice and D-Sight.
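For context, Expert Choice implements Saaty's Analytic Hierarchy Process (AHP), and D-Sight implements PROMETHEE-based outranking. As an illustration of the AHP step only, the sketch below derives criteria weights from a pairwise comparison matrix; the criteria and judgment values are hypothetical, not taken from the case study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three supplier-selection
# criteria (cost, technical capability, track record), Saaty 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector of the comparison matrix gives the weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index and ratio (random index RI = 0.58 for n = 3);
# judgments are usually accepted when CR < 0.10.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))
```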
Abstract:
Satellites and space equipment are exposed to diffuse acoustic fields during launch. Using adequate techniques to model the response to acoustic loads is a fundamental task during the design and verification phases, and the modal density of each element must be considered to identify the correct methodology. This paper presents selection criteria for choosing the correct modelling technique depending on the frequency range. The response of a model satellite to acoustic loads is presented, determining the modal densities of each component in different frequency ranges. The paper proposes the mathematical method to select in each modal density range and examines the differences in the response estimates produced by the different techniques. In addition, methodologies to analyse the intermediate frequency range of the system are discussed. The results are compared with data obtained in an experimental modal test.
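In vibroacoustics, deterministic finite element models are generally used where modal density is low, Statistical Energy Analysis (SEA) where it is high, and hybrid schemes in between. The toy selector below encodes that rule of thumb with placeholder thresholds; the paper's actual criteria and threshold values are not reproduced here.

```python
def pick_method(modal_density, low=0.1, high=1.0):
    """Crude selector: FEM where modes are sparse, SEA where they are
    dense, a hybrid (e.g. FE-SEA) in between. Thresholds (modes/Hz)
    are illustrative placeholders, not values from the paper."""
    if modal_density < low:
        return "FEM (deterministic, low modal density)"
    if modal_density > high:
        return "SEA (statistical, high modal density)"
    return "hybrid FE-SEA (intermediate range)"

for n in (0.02, 0.4, 3.0):
    print(f"{n:5.2f} modes/Hz -> {pick_method(n)}")
```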
Abstract:
Following the Integrated Water Resources Management approach, the European Water Framework Directive requires Member States to develop water management plans at the catchment level. Those plans have to integrate the different interests and must be developed with stakeholder participation. To meet these requirements, managers need tools to assess the impacts of possible management alternatives on natural and socio-economic systems. These tools should ideally be able to address the complexity and uncertainties of the water system, while serving as a platform for stakeholder participation. The objective of our research was to develop a participatory integrated assessment model, based on the combination of a crop model, an economic model and a participatory Bayesian network, with an application in the middle Guadiana sub-basin in Spain. The methodology is intended to capture the complexity of water management problems, incorporating the relevant sectors as well as the relevant scales involved in water management decision making. The integrated model has allowed us to test different management, market and climate change scenarios and to assess their impacts on the natural system (crops), the socio-economic system (farms) and the environment (water resources). Finally, this integrated assessment modelling process has enabled stakeholder participation, complying with the main requirements of current European water laws.
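To make the Bayesian network component concrete, here is a minimal hand-rolled sketch of probabilistic inference along a hypothetical water → crop yield → farm income chain; the variables and probabilities are invented for illustration and are not the network elicited with stakeholders in the study.

```python
# Hypothetical CPTs for a toy water -> yield -> income chain, in the
# spirit of the participatory Bayesian network described above.
p_yield = {   # P(yield = high | water scenario)
    "wet": 0.8, "dry": 0.3,
}
p_income = {  # P(income = good | yield); True means high yield
    True: 0.9, False: 0.4,
}

def p_good_income(water):
    """Marginalize over the yield node."""
    ph = p_yield[water]
    return ph * p_income[True] + (1 - ph) * p_income[False]

for w in ("wet", "dry"):
    print(f"P(good income | {w} scenario) = {p_good_income(w):.2f}")
```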
Abstract:
Sandwich panels of laminated gypsum and rock wool have shown extensive cracking pathology caused by excessive slab deflection. Currently, the most widespread use of this material is in vertical division or partition elements with no structural function, which explains why there are no studies on its fracture mechanisms and the related mechanical properties. Therefore, in order to reduce the cracking problem, progress is needed in simulating and predicting the behaviour of such panels under tensile and shear loads, even though they bear no structural responsibility in typical applications.
Abstract:
This article presents a new material model developed with the aim of analyzing the failure of blunt-notched components made of nonlinear brittle materials. The model, which combines the cohesive crack model with Hencky's theory of total deformations, is used to simulate an experimental benchmark previously carried out by the authors. The combination is achieved through the embedded crack approach. In spite of the unavailability of precise material data, the numerical predictions obtained show good agreement with the experimental results.
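For background, a cohesive crack model replaces the fracture process zone with a traction-separation (softening) law relating the transmitted stress to the crack opening w. A generic linear softening law, shown below purely for illustration (the paper's actual curve may differ), reads:

```latex
% Linear cohesive softening (illustrative):
% f_t = tensile strength, w_c = critical opening,
% fracture energy G_F = f_t * w_c / 2.
\sigma(w) = f_t \left( 1 - \frac{w}{w_c} \right), \qquad 0 \le w \le w_c
```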
Abstract:
In this paper, a mathematical programming model and a heuristically derived solution are described to assist with the efficient planning of services for a set of auxiliary bus lines (a bus-bridging system) during disruptions of metro and rapid transit lines. The model can be considered static and takes into account the average flows of passengers over a given period of time (e.g., the morning peak hour). Auxiliary bus services must accommodate very high demand levels, and the model presented is able to take into account the operation of a bus-bridging system under congested conditions. A general analysis of congestion in public transportation lines is presented, and the results are applied to the design of a bus-bridging system. A nonlinear integer mathematical programming model and a suitable approximation of this model are then formulated. The approximated model can be solved by a heuristic procedure that has been shown to be computationally viable. The output of the model is as follows: (a) the number of bus units to assign to each of the candidate lines of the bus-bridging system; (b) the routes to be followed by the passengers of each origin–destination pair; (c) the operational conditions of the components of the bus-bridging system, including the passenger load of each line segment, the degree of saturation of the bus stops relative to their bus input flows, the bus service times at bus stops, and the passenger waiting times at bus stops. The model is able to take into account bounds on the maximum number of passengers waiting at bus stops and on the space available at bus stops for the queueing of bus units. The applicability of the model is demonstrated with two realistic test cases: a railway corridor in Madrid and a metro line in Barcelona.
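As a toy version of the fleet-allocation step only, the greedy sketch below assigns buses one at a time to the candidate line with the largest demand-weighted expected waiting time; it is an illustrative stand-in, not the approximation algorithm or the congestion model of the paper.

```python
def allocate(demand, cycle_time, fleet):
    """demand: pax/h per candidate line; cycle_time: h per round trip;
    fleet: total buses available. Returns buses per line."""
    buses = [1] * len(demand)              # at least one bus per line
    for _ in range(fleet - len(demand)):
        # headway = cycle_time / buses; expected wait ~ half the headway;
        # weight by demand to get total passenger waiting time per hour.
        waits = [c / (2 * b) * d for d, c, b in zip(demand, cycle_time, buses)]
        buses[waits.index(max(waits))] += 1
    return buses

print(allocate(demand=[1200, 800, 400], cycle_time=[0.8, 0.6, 0.5], fleet=12))
```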
Abstract:
Road accidents are a very relevant issue in many countries, and macroeconomic models are frequently applied by academia and administrations to reduce accident frequency and consequences. For the selection of the explanatory variables and of the response transformation parameter within the Bayesian framework, TIM and 3IM (two-input and three-input model) procedures are proposed. The procedure also uses the DIC and pseudo-R² goodness-of-fit criteria. The model to which the methodology is applied is a dynamic regression model with Box-Cox transformation (BCT) for the explanatory variables and an autoregressive (AR) structure for the response. An initial set of 22 explanatory variables is identified. The effects of these factors on fatal accident frequency in Spain during 2000–2012 are estimated. The dependent variable is constructed considering the stochastic trend component.
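A minimal frequentist stand-in for the model class described (a Box-Cox-transformed regressor with AR errors) is sketched below using scipy and statsmodels; the data are simulated, AIC replaces the paper's DIC/pseudo-R² criteria, and the stochastic trend component is omitted.

```python
import numpy as np
from scipy.stats import boxcox
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 156                                          # e.g. monthly data, 2000-2012
exposure = rng.gamma(5.0, 2.0, n)                # hypothetical explanatory variable
y = 50.0 + 3.0 * exposure + rng.normal(0, 4, n)  # hypothetical accident series

x_bc, lam = boxcox(exposure)                     # Box-Cox transform of regressor
res = ARIMA(y, exog=x_bc, order=(1, 0, 0)).fit() # regression with AR(1) errors
print(f"Box-Cox lambda = {lam:.2f}, AIC = {res.aic:.1f}")
```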
Abstract:
In this study we propose a Bayesian model selection methodology in which the best model is selected from a list of candidate structural explanatory models. The model structure is based on Zellner's (1971) explanatory model with autoregressive errors. The selection technique favours a parsimonious model in which the variables are transformed using the Box and Cox (1964) class of transformations.
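The sketch below illustrates the selection idea with an exhaustive search over candidate regressor subsets scored by BIC, which approximates Bayesian model selection under vague priors; the data are simulated and the procedure is a simplification, not the Zellner-prior machinery of the study.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))                       # four candidate regressors
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 1, n)

best = None
for k in range(1, 5):
    for cols in itertools.combinations(range(4), k):
        Xc = sm.add_constant(X[:, cols])          # candidate design matrix
        bic = sm.OLS(y, Xc).fit().bic
        if best is None or bic < best[0]:
            best = (bic, cols)
print(f"selected regressors: {best[1]}, BIC = {best[0]:.1f}")
```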
Abstract:
Software development is an inherently complex activity that requires specific abilities of problem comprehension and solving. It is so difficult that it can even be said that there is no perfect method for each of the development stages, nor a perfect life cycle model: each new problem differs from the previous ones in some respect, and techniques that worked in earlier projects can fail in new ones. Given that situation, the current trend is to integrate different methods, tools and techniques, using those best suited to the problem facing the engineer. This trend, however, raises new problems. The first is the selection of development approaches: if there is no single best approach, how does one choose from the array of available options? The second problem concerns the relationship between the analysis and design phases, which carries two major risks. On the one hand, the analysis may be too shallow and too far removed from the design, making the transition between them very difficult. On the other hand, the analysis may be expressed in design terminology, thus becoming a kind of preliminary design rather than a model of the problem to be solved. Third, there is the analysis dilemma, which can be stated as follows: the developer has to choose the most adequate techniques for each problem, and to make this decision it is necessary to know the most relevant properties of the problem. This implies that the developer has to analyse the problem, choosing an analysis technique before really knowing the problem; if the chosen technique uses design terminology, then the solution paradigm has been preconditioned and, once the problem is well known, that paradigm may not turn out to be the most adequate one. The last problem consists of the pragmatic barriers that limit the applicability of formally based methods, making them difficult to use in everyday practice. To solve these problems, analysis methods are needed that fulfil several goals. The first is a formal basis, which prevents ambiguity and allows verification of the analysis models. The second is design independence: the analysis should use terminology with no direct counterpart in the design, so that it can focus on the characteristics of the problem. Third, the method should allow the developer to analyse problems of any kind: algorithmic, decision-support, or knowledge-based, among others. Next come two goals related to pragmatic aspects. First, the method should provide a formal but non-mathematical textual notation, so that the resulting models can be understood and validated by people without deep mathematical knowledge while remaining rigorous enough for verification. Second, a complementary graphical notation is required so that the models can be comfortably understood and validated by clients and users. This PhD thesis presents SETCM, an analysis method that fulfils these goals. All the elements composing the analysis models have been defined using terminology that is independent of design paradigms, and these definitions have been formalised using the fundamental elements of set theory: elements, sets and correspondences between sets. In addition, a formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that can visually represent the most relevant parts of the models. The proposed method has undergone an intensive experimentation phase, during which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private entities. The experimentation evaluated the adequacy of SETCM for the analysis of problems of different sizes and for systems whose final design used different, and even mixed, paradigms. Its use by analysts with different levels of experience (novice, intermediate and expert) was also evaluated, analysing the learning curve in each case to determine whether the method is easy to learn, regardless of prior knowledge of other analysis techniques. The expandability of models generated with SETCM was studied as well, to check whether it supports projects carried out in several phases, in which the analysis of one phase extends the analysis of the previous one. In short, the aim was to evaluate whether SETCM can be adopted within an organisation as the preferred analysis technique for software development. The results of this experimentation have been very positive, achieving a high degree of fulfilment of all the goals set when defining the method.
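The set-theoretic core described above (model elements as sets, plus correspondences between sets) can be illustrated with a small hand-rolled sketch; the sets, the correspondence and the property checks below are invented examples, not SETCM notation.

```python
# Toy rendering of analysis-model elements as sets plus a correspondence
# between them. Names (customers, orders, places) are illustrative only.
customers = {"alice", "bob"}
orders = {"o1", "o2", "o3"}
places = {("alice", "o1"), ("alice", "o2"), ("bob", "o3")}

def is_total(rel, domain):
    """Every domain element participates in the correspondence."""
    return domain <= {a for a, _ in rel}

def is_functional(rel):
    """No domain element maps to two different images."""
    seen = {}
    return all(seen.setdefault(a, b) == b for a, b in rel)

print(is_total(places, customers), is_functional(places))  # True False
```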
Abstract:
Since the memristor was first physically realised at HP Labs in 2008, countless devices and models have been presented, and new applications appear frequently. However, integrating the device at the circuit level is not straightforward, because available models are still immature and/or impose high computational loads, making their simulation long and cumbersome. This study assists circuit/system designers in integrating memristors in their applications, while aiding model developers in validating their proposals. We introduce a memristor application framework to support the work of both the model developer and the circuit designer. First, the framework includes a library with the best-known memristor models, easily extensible with upcoming models. Systematic modifications have been applied to these models to provide better convergence and significant simulation speedups. Second, a quick device simulator allows the study of the response of the models under different scenarios, helping the designer select stimuli and operation times. Third, fine-tuning of the device, including parameter variations and threshold determination, is also supported. Finally, SPICE/Spectre subcircuit generation is provided to ease the integration of the devices in application circuits. The framework gives the designer total control over convergence, computational load, and the evolution of system variables, overcoming the usual problems in the integration of memristive devices.
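As a minimal example of the kind of model such a library might contain, the sketch below integrates the classic HP linear dopant drift memristor (Strukov et al., 2008) with forward Euler under a sinusoidal drive; parameter values are textbook-style placeholders and the code is not taken from the framework.

```python
import numpy as np

# HP-style linear dopant drift memristor, forward-Euler integration.
Ron, Roff = 100.0, 16e3        # fully doped / undoped resistance (ohm)
D, mu = 10e-9, 1e-14           # device length (m), dopant mobility (m^2/(V*s))
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = 2.0 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz, 2 V sinusoidal drive

w = 0.1 * D                    # initial doped-region width
M_hist = []
for vk in v:
    M = Ron * (w / D) + Roff * (1.0 - w / D)               # memristance
    i = vk / M
    w = float(np.clip(w + mu * Ron / D * i * dt, 0.0, D))  # drift + bounds
    M_hist.append(M)
print(f"memristance swept between {min(M_hist):.0f} and {max(M_hist):.0f} ohm")
```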
Abstract:
This PhD dissertation is framed in the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years, becoming a fully recognized subdiscipline of the Operations Management field. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. Its main contribution to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings, and has been contrasted with the existing literature as well as with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment, and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining the fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling the return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, solutions to these issues are developed. Firstly, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Secondly, a methodology for calculating the required fleet size is elaborated (problem (1)), valid for different configurations of the physical flows in the reuse CLSC. Likewise, directions are pointed out for the further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed integer linear programming (MILP) models, formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and an Excel spreadsheet for data input and output presentation. The results obtained are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision making regarding the selection of the appropriate recovery policy according to the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thus strengthening the robustness of the results obtained.
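A deliberately small MILP in the spirit of the fleet-sizing problem (block 4) is sketched below using PuLP rather than the AIMMS/CPLEX stack of the dissertation; the data, the cycle-time and return-rate parameters, and the constraint structure are invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

T, L, r = 8, 2, 0.9                     # periods, cycle time (periods), return rate
d = [40, 55, 60, 50, 45, 65, 70, 50]    # shipments needing one article per period

prob = LpProblem("fleet_sizing", LpMinimize)
p = [LpVariable(f"buy_{t}", lowBound=0, cat="Integer") for t in range(T)]
prob += lpSum(p)                        # minimize total articles purchased

for t in range(T):
    # Articles shipped in the last L periods are still with clients;
    # a fraction (1 - r) of every completed cycle is never returned.
    in_use = sum(d[max(0, t - L + 1): t + 1])
    lost = (1 - r) * sum(d[: max(0, t - L + 1)])
    prob += lpSum(p[: t + 1]) >= in_use + lost

prob.solve()
print("total purchases:", value(prob.objective))
print("per period:", [int(v.value()) for v in p])
```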
Abstract:
Two experiments were conducted to estimate the standardized ileal digestible (SID) Trp:Lys ratio requirement for growth performance of nursery pigs. Experimental diets were formulated to ensure that lysine was the second limiting AA throughout the experiments. In Exp. 1 (6 to 10 kg BW), 255 nursery pigs (PIC 327 × 1050, initially 6.3 ± 0.15 kg BW, mean ± SD) arranged in pens of 6 or 7 pigs were blocked by pen weight and assigned to experimental diets (7 pens/diet) consisting of SID Trp:Lys ratios of 14.7%, 16.5%, 18.4%, 20.3%, 22.1%, and 24.0% for 14 d with 1.30% SID Lys. In Exp. 2 (11 to 20 kg BW), 1,088 pigs (PIC 337 × 1050, initially 11.2 ± 1.35 kg BW, mean ± SD) arranged in pens of 24 to 27 pigs were blocked by average pig weight and assigned to experimental diets (6 pens/diet) consisting of SID Trp:Lys ratios of 14.5%, 16.5%, 18.0%, 19.5%, 21.0%, 22.5%, and 24.5% for 21 d with 30% dried distillers grains with solubles and 0.97% SID Lys. Each experiment was analyzed using general linear mixed models with heterogeneous residual variances. Competing heteroskedastic models included broken-line linear (BLL), broken-line quadratic (BLQ), and quadratic polynomial (QP). For each response, the best-fitting model was selected using the Bayesian information criterion. In Exp. 1 (6 to 10 kg BW), increasing the SID Trp:Lys ratio linearly increased (P < 0.05) ADG and G:F. For ADG, the best-fitting model was a QP in which the maximum ADG was estimated at 23.9% (95% confidence interval [CI]: [<14.7%, >24.0%]) SID Trp:Lys ratio. For G:F, the best-fitting model was a BLL in which the maximum G:F was estimated at 20.4% (95% CI: [14.3%, 26.5%]) SID Trp:Lys. In Exp. 2 (11 to 20 kg BW), increasing the SID Trp:Lys ratio increased (P < 0.05) ADG and G:F in a quadratic manner. For ADG, the best-fitting model was a QP in which the maximum ADG was estimated at 21.2% (95% CI: [20.5%, 21.9%]) SID Trp:Lys. For G:F, the BLL and BLQ models had comparable fit and estimated the SID Trp:Lys requirement at 16.6% (95% CI: [16.0%, 17.3%]) and 17.1% (95% CI: [16.6%, 17.7%]), respectively. In conclusion, the estimated SID Trp:Lys requirement in Exp. 1 ranged from 20.4% for maximum G:F to 23.9% for maximum ADG, whereas in Exp. 2 it ranged from 16.6% for maximum G:F to 21.2% for maximum ADG. These results suggest that standard NRC (2012) recommendations may underestimate the SID Trp:Lys requirement for nursery pigs from 11 to 20 kg BW.
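A simplified sketch of the dose-response fitting described above (BLL vs. QP, compared by BIC) is given below using plain nonlinear least squares on hypothetical G:F data; it omits the heterogeneous-variance mixed-model machinery of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: SID Trp:Lys ratio (%) vs. G:F.
x = np.array([14.5, 16.5, 18.0, 19.5, 21.0, 22.5, 24.5])
y = np.array([0.58, 0.64, 0.66, 0.67, 0.67, 0.66, 0.67])

def bll(x, plateau, slope, brk):
    """Broken-line linear: rises below the breakpoint, flat above it."""
    return plateau + slope * np.minimum(x - brk, 0.0)

def qp(x, a, b, c):
    """Quadratic polynomial."""
    return a + b * x + c * x**2

def bic(resid, k, n):
    """BIC for Gaussian errors: n*ln(RSS/n) + k*ln(n)."""
    return n * np.log(np.sum(resid**2) / n) + k * np.log(n)

n = len(x)
p1, _ = curve_fit(bll, x, y, p0=(0.67, 0.02, 18.0))
p2, _ = curve_fit(qp, x, y, p0=(0.0, 0.06, -0.001))
print(f"BLL breakpoint ~{p1[2]:.1f}%  "
      f"BIC(BLL)={bic(y - bll(x, *p1), 3, n):.1f}  "
      f"BIC(QP)={bic(y - qp(x, *p2), 3, n):.1f}")
```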
Abstract:
We define a capacity reserve model to dimension passenger car service installations according to the demographic distribution of the area to be serviced, using analogies with a hospital's emergency room. Usually, service facilities are designed by applying empirical methods, but customers arrive under uncertain conditions not included in the original estimations, and there is a gap between customers' real demand and the service's capacity. Our research establishes a valid methodology, addressing the absence of recent research and the limited implementation of statistical techniques, and integrates demand uncertainty in a single model built in stages by implementing ARIMA forecasting, queueing theory, and Monte Carlo simulation to optimize service capacity and occupancy, minimizing the implicit cost of the capacity that must be reserved to serve unexpected customers. Our model has proved to be a useful tool for optimal decision making under uncertainty, integrating the prediction of the cost implicit in the capacity reserved to serve unexpected demand and defining a set of new process indicators, such as capacity, occupancy, and cost of the capacity reserve, which have not been studied before. The new indicators are intended to optimize the service operation, and they could be implemented in the information systems used by passenger car services.
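The sketch below illustrates only the Monte Carlo step of such a staged model: given a mean arrival rate (which the paper would obtain from an ARIMA forecast), it simulates Poisson daily arrivals against a candidate capacity and reports occupancy and the overflow that the capacity reserve must absorb. All rates and capacities are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 38.0                      # forecast mean demand (cars/day)
cap_per_bay = 6                 # serviceable cars per bay per day

def overflow_rate(bays, days=100_000):
    """Fraction of demand exceeding capacity, estimated by simulation."""
    arrivals = rng.poisson(lam, size=days)
    capacity = bays * cap_per_bay
    return np.mean(np.maximum(arrivals - capacity, 0)) / lam

for bays in range(7, 12):
    occ = lam / (bays * cap_per_bay)   # mean occupancy of installed capacity
    print(f"{bays} bays: occupancy {occ:5.1%}, overflow {overflow_rate(bays):6.2%}")
```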