Resumo:
Computational fluid dynamics (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings remains infeasible because such models are complex, computationally intensive, and time-consuming. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off between physiological accuracy, patient-specificity, simplicity, and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses, and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometric manipulation techniques to obtain a model of the patient's vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distribution of blood flow through each outlet branch. As long as the appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use, and more practical than their impedance counterparts, are sufficient to study aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities.
The only situations in which impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where resistance boundary conditions lead to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may be suitable for studying patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
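As a rough illustration of the resistance outlet boundary condition discussed above, the sketch below shows how prescribing P = R·Q at each outlet fixes the flow split among parallel branches. All resistance and flow values are hypothetical, chosen only for illustration; they are not taken from the study.

```python
# Illustrative sketch of a resistance outlet boundary condition, P = R * Q.
# The numbers are hypothetical; they only demonstrate how outlet resistances
# determine the relative flow distribution among branches.

def flow_split(total_flow, resistances):
    """Divide a parent-vessel flow among parallel outlet branches.

    For parallel resistive outlets sharing one upstream pressure,
    flow through each branch is inversely proportional to its resistance.
    """
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_flow * g / g_total for g in conductances]

def outlet_pressure(flow, resistance):
    """Resistance boundary condition: outlet pressure is R * Q."""
    return resistance * flow

# Two hypothetical outlet branches; the higher-resistance branch takes less flow.
q_parent = 4.0                             # total flow entering the bifurcation
flows = flow_split(q_parent, [1.0, 3.0])   # branch flows, proportional to 1/R
print(flows)
print(outlet_pressure(flows[0], 1.0))      # both outlets see the same pressure
print(outlet_pressure(flows[1], 3.0))
```

Keeping the flow distribution physiological, as the abstract notes, amounts to choosing the ratios of these resistances; the absolute pressure waveform is where impedance models differ.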
Resumo:
Purpose of this paper:
Recent literature indicates that around one third of perishable products end up as waste (Mena et al., 2014): 60% of this waste can be classified as avoidable (EC, 2010), suggesting logistics and operational inefficiencies along the supply chain. In developed countries, perishable products are predominantly wasted in wholesale and retail (Gustavsson et al., 2011) due to customer demand uncertainty and to errors and delays in the supply chain (Fernie and Sparks, 2014). While research on the logistics of large retail supply chains is well documented, research on retail small and medium enterprises' (SMEs) capabilities to prevent and manage waste of perishable products is in its infancy (c.f. Ellegaard, 2008) and needs further exploration. In our study, we investigate the retail logistics practice of small food retailers, the factors that contribute to perishable product waste, and the barriers and opportunities for SMEs in retail logistics to preserve product quality and participate in reverse logistics flows.
Design/methodology/approach:
As research on the waste of perishable products in SMEs is scattered, we first focus on identifying key variables that contribute to the creation of avoidable waste. Second, we identify patterns of waste creation at the retail level and the possibilities for value added recovery. We use exploratory case studies (Eisenhardt, 1989) and compare four SMEs and one large retailer that operate in a developed market. To gain insight into the specificities of SMEs that affect retail logistics practice, we select two types of food retailers: specialised (e.g. greengrocers and bakers) and general (e.g. convenience stores that sell perishable products as part of their assortment).
Findings:
Our preliminary findings indicate that large retailers and SME retailers differ in the factors that contribute to waste creation, as well as in the opportunities for value added recovery of products. While many factors appear to affect waste creation and management at large retailers, a small number of specific factors appear to affect SMEs. Similarly, large retailers utilise a range of practices to reduce the risks of product perishability and short shelf life, manage demand, and manage reverse logistics practices. Retail SMEs, on the other hand, have limited options to address waste creation and value added recovery. However, our findings show that specialist SMEs can successfully minimise waste and even create possibilities for value added recovery of perishable products. The data indicate that the business orientation of the SME, the buyer-supplier relationship, and the extent of adoption of lean principles in retail, coupled with SME resources, product-specific regulations, and support from local authorities for waste management or partnerships with other organisations, determine the extent of successful preservation of product quality and value added recovery.
Value:
Our contribution to the SCM academic literature is threefold: first, we identify the major factors that contribute to the generation of perishable product waste in the retail environment; second, we identify possibilities for value added recovery of perishable products; and third, we present opportunities and challenges for SME retailers to manage or participate in activities of value added recovery. Our findings contribute to theory by filling a gap in the literature concerning product quality preservation and value added recovery in the context of retail logistics and SMEs.
Research limitations/implications:
Our findings are limited to insights from five case studies of retail companies that operate within a developed market. To improve on generalisability, we intend to increase the number of cases and include data obtained from the suppliers and organizations involved in reverse logistics flows (e.g. local authorities, charities, etc.).
Practical implications:
With this paper, we contribute to the improvement of retail logistics and operations in SMEs, which constitute over 99% of business activity in the UK (Rhodes, 2015). Our findings will help retail managers and owners to better understand the possibilities for value added recovery, to investigate a range of logistics and retail strategies suitable for the specificities of the SME environment and, ultimately, to improve their profitability and sustainability.
Resumo:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems, and the thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail using experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties, and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes.
A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed in \cite{Juric1998} for the computation of a range of phase change problems, including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004}, and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge for methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set and volume of fluid method with a diffused interface approach was used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations. A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical}, and flow boiling in a finned microchannel \cite{lee2012direct}.
The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those for the interface tracking method. Such spurious fluctuations in pressure are particularly undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant coefficient Poisson equation. The approach has shown good results for enclosed bubbles or droplets but does not generalize to more complex flows and requires the additional solution of a linear system of equations. In this thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose (iii) a general mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations. Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase change method.
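The one-dimensional Stefan problem mentioned above as a standard verification case has a classical similarity solution: the interface sits at X(t) = 2λ√(αt), where λ solves λ·exp(λ²)·erf(λ) = St/√π for the Stefan number St. The sketch below evaluates this analytical benchmark with a hypothetical Stefan number and thermal diffusivity; it is not the thesis's solver, only the kind of reference solution such solvers are verified against.

```python
# Analytical benchmark for the one-phase Stefan problem.
# Interface position: X(t) = 2 * lam * sqrt(alpha * t), with lam solving
# lam * exp(lam**2) * erf(lam) = St / sqrt(pi).
import math

def stefan_lambda(stefan_number, lo=1e-9, hi=5.0, tol=1e-12):
    """Solve the transcendental equation for lambda by bisection.

    The left-hand side is monotonically increasing in lambda, so a simple
    bisection on [lo, hi] converges.
    """
    target = stefan_number / math.sqrt(math.pi)
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, alpha, stefan_number):
    """Analytical interface location for the one-phase Stefan problem."""
    lam = stefan_lambda(stefan_number)
    return 2.0 * lam * math.sqrt(alpha * t)

# Hypothetical values: Stefan number 0.1, thermal diffusivity 1e-7 m^2/s.
print(stefan_lambda(0.1))                    # small lambda for small superheat
print(interface_position(100.0, 1e-7, 0.1))  # interface location at t = 100 s
```

Note the square-root-in-time growth: doubling √t doubles the interface displacement, which is the behavior a numerical phase change scheme should reproduce on this problem.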
Resumo:
This thesis argues that the study of narrative television has been limited by an adherence to accepted and commonplace conceptions of endings as derived from literary theory, particularly a preoccupation with the terminus of the text as the ultimate site of cohesion, structure, and meaning. Such common conceptions of endings, this thesis argues, are largely incompatible with the realities of television’s production and reception, and as a result the study of endings in television needs to be re-thought to pay attention to the specificities of the medium. In this regard, this thesis proposes a model of intra-narrative endings, islands of cohesion, structure, and meaning located within television texts, as a possible solution to the problem of endings in television. These intra-narrative endings maintain the functionality of traditional endings, whilst also allowing for the specificities of television as a narrative medium. The first two chapters set out the theoretical groundwork, first by exploring the essential characteristics of narrative television (serialisation, fragmentation, duration, repetition, and accumulation), then by exploring the unique relationship between narrative television and the forces of contingency. These chapters also introduce the concept of intra-narrative endings as a possible solution to the problems of television’s narrative structure, and the medium’s relationship to contingency. Following on from this my three case studies examine forms of television which have either been traditionally defined as particularly resistant to closure (soap opera and the US sitcom) or which have received little analysis in terms of their narrative structure (sports coverage). Each of these case studies provides contextual material on these televisual forms, situating them in terms of their narrative structure, before moving on to analyse them in terms of my concept of intra-narrative endings. 
In the case of soap opera, the chapter focusses on the death of the long-running character Pat Butcher in the British soap EastEnders (BBC, 1985-), while my chapter on the US sitcom focusses on the varying levels of closure that can be located within the US sitcom, using Friends (NBC, 1993-2004) as a particular example. Finally, my chapter on sports coverage analyses the BBC's coverage of the 2012 London Olympics, and focusses on the narratives surrounding cyclists Chris Hoy and Victoria Pendleton. Each of these case studies identifies its chosen events as intra-narrative endings within larger, ongoing texts, and analyses the various ways in which they operate within those wider texts. This thesis is intended to make a contribution to the emerging field of endings studies within television by shifting the understanding of endings away from a dominant literary model which overwhelmingly focusses on the terminus of the text, to a more televisually specific model which pays attention to the particular contexts of the medium's production and reception.
Resumo:
Currently, because much of the world's attention is focused on petroleum, considerable research has been devoted to enabling production from reservoirs once classified as unviable. Given the geological and operational challenges of oil recovery, increasingly efficient and economically successful methods are being sought. In this context, steam flooding stands out, especially when combined with other procedures to achieve low costs and high recovery factors. This work used nitrogen as an alternative fluid injected after steam, in order to find the best alternation scheme between these fluids in terms of timing and injection rate. To describe a simplified economic profile, several analyses based on cumulative liquid production were performed. The completion interval and fluid injection rates were fixed, and the oil viscosity was varied over 300 cP, 1,000 cP, and 3,000 cP. For each viscosity, the results defined a specific model indicating the best time to stop steam injection and begin nitrogen injection, namely when the first injected fluid reached its economic limit. Simulations were run on a physical model based on one-eighth of an inverted nine-spot pattern, using the commercial simulator STARS (Steam, Thermal and Advanced Processes Reservoir Simulator) from the Computer Modelling Group (CMG).
Resumo:
The interactions between host individual, host population, and environmental factors modulate parasite abundance in a given host population. Since adult exophilic ticks are highly aggregated on red deer (Cervus elaphus) and this ungulate exhibits significant sexual size dimorphism as well as sex-specific life history traits and segregation, we hypothesized that tick parasitism on males and hinds would be differentially influenced by each of these factors. To test this hypothesis, ticks from 306 red deer (182 males and 124 females) were collected over 7 years in a red deer population in south-central Spain. Using generalized linear models with a negative binomial error distribution and a logarithmic link function, we modeled tick abundance on deer with 20 potential predictors. Three models were developed: one for red deer males, another for hinds, and one combining data for males and females and including "sex" as a factor. Our rationale was that if tick burdens on males and hinds relate to the explanatory factors in a differential way, it is not possible to precisely and accurately predict the tick burden on one sex using the model fitted on the other sex, or using the model that combines data from both sexes. Our results showed that deer males were the primary target for ticks, that the weight of each factor differed between sexes, and that each sex-specific model was unable to accurately predict burdens on animals of the other sex. That is, the results support sex-biased differences. The greater weight of host individual and population factors in the model for males shows that intrinsic deer factors explain tick burden more strongly than the environmental abundance of host-seeking ticks. In contrast, environmental variables predominated in the models explaining tick burdens in hinds.
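As a schematic of the model structure described above, not the fitted models themselves, the sketch below shows the two defining ingredients of a negative binomial GLM: the logarithmic link (the expected count is the exponential of the linear predictor) and overdispersed variance. The coefficients, predictors, and dispersion parameter are hypothetical.

```python
# Schematic of a negative binomial GLM with a log link:
#   log(mu) = beta0 + beta1*x1 + ... ;  Var(Y) = mu + mu**2 / theta.
# All coefficient values below are hypothetical, for illustration only.
import math

def nb_glm_mean(predictors, beta):
    """Log link: the expected tick burden is exp(linear predictor)."""
    eta = sum(b * x for b, x in zip(beta, predictors))
    return math.exp(eta)

def nb_variance(mu, theta):
    """Negative binomial variance exceeds the Poisson variance (overdispersion)."""
    return mu + mu * mu / theta

# Hypothetical model: intercept, a centred body-size covariate, a male indicator.
beta = [1.2, 0.02, 0.8]
male = nb_glm_mean([1.0, 10.0, 1.0], beta)   # larger male deer
hind = nb_glm_mean([1.0, 0.0, 0.0], beta)    # average hind
print(male, hind)                            # males predicted to carry more ticks
print(nb_variance(male, theta=1.5))          # variance larger than the mean
```

The log link guarantees non-negative predicted burdens, and the quadratic variance term is what accommodates the strong aggregation of ticks on a few hosts that the abstract describes.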
Resumo:
Purpose – The purpose of this study is to investigate which factors need to be considered in sourcing decisions to ensure an optimal long-term decision, and which of these factors can be quantified in a product costing model. To fulfill this purpose, two research questions were proposed: Which factors need to be considered in a sourcing decision? Which of these factors can be quantified in a product costing model? Method – A case study was conducted to fulfill the purpose of this study. The case study produced empirical data through interviews and document studies. The empirical data were interpreted and analyzed on the basis of a theoretical framework created through literature studies. This process produced the results of this study. Findings – Factors to be considered in a sourcing decision have been identified and categorized into four overarching categories: unit cost, logistical factors, capability factors, and risk factors. These factors have been quantified in a product costing model. A preparatory decision model was created to further integrate some risk factors that could not be quantified. Implications – Both the make-or-buy decision and the manufacturing location decision are considered in the product costing model presented in this study. The product costing model visualizes and takes into account hidden costs rarely considered in sourcing decisions, which further enables optimal long-term sourcing decisions. Limitations – Risk factors remain difficult to quantify. This makes it difficult to determine the cost of risk factors and, consequently, to include them in a product costing model. Since the case study was conducted at only one company, the model suits companies with similar conditions. Whether the product costing model holds for businesses in other contexts remains uncertain.
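As an illustration of the kind of quantification such a product costing model performs, the sketch below sums unit cost with quantifiable logistical, capability, and risk factors into a comparable per-unit figure. The four categories follow the abstract; the helper name `landed_cost` and all numbers are hypothetical, not taken from the case company.

```python
# Illustrative per-unit sourcing cost comparison across the four factor
# categories named in the study: unit cost, logistical, capability, and risk.
# All figures are hypothetical.

def landed_cost(unit_cost, logistics_cost, capability_cost, risk_costs):
    """Sum the quantifiable factor categories into a comparable per-unit cost."""
    return unit_cost + logistics_cost + capability_cost + sum(risk_costs)

# Compare a make option against a buy option with a lower unit cost.
make = landed_cost(unit_cost=9.0, logistics_cost=0.5, capability_cost=0.3,
                   risk_costs=[0.1])
buy = landed_cost(unit_cost=7.0, logistics_cost=1.8, capability_cost=0.6,
                  risk_costs=[0.4, 0.5])  # e.g. disruption and quality risks
print(make, buy)  # hidden costs can erase the apparent unit-cost advantage
```

This is the "hidden cost" point of the abstract in miniature: the buy option wins on unit cost alone but loses once logistical and quantified risk factors are included.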
Resumo:
Motivated by the evolutionary dynamics of the economics of Information and Communication Technologies and the establishment of minimum speed standards in various regulatory contexts worldwide, and in Colombia in particular, this article presents several empirical approaches to evaluate the real effects of establishing definitions of broadband service in the fixed Internet market. Based on the data available for Colombia on fixed Internet service plans offered during the period 2006-2012, we estimate, for the residential and corporate segments, a modified logistic diffusion process and a strategic interaction model to identify, respectively, the impacts on the uptake of the service at the municipal level and on the strategic decisions adopted by operators. Regarding the results, we find, on the one hand, that the two regulatory measures established in Colombia in 2008 and 2010 have significant and positive effects on the displacement and growth of the diffusion processes at the municipal level. On the other hand, strategic substitutability is observed in the download speed decisions of corporate operators, while an analysis of the distance between offered speeds and the minimum broadband standard shows that residential service providers tend to cluster their speed decisions around the levels established by regulation.
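As a sketch of the diffusion framework estimated in the article, the function below evaluates a standard logistic adoption curve, S(t) = K / (1 + e^(-(a + b·t))). The study's modified specification and its municipal-level parameter estimates are not reproduced here; all parameter values shown are hypothetical.

```python
# Standard logistic diffusion curve for service adoption:
#   S(t) = K / (1 + exp(-(a + b*t)))
# K is the saturation ceiling, a sets initial penetration, b the growth rate.
# Parameter values are hypothetical, for illustration only.
import math

def logistic_diffusion(t, ceiling, a, b):
    """Adoption level at time t along a logistic diffusion path."""
    return ceiling / (1.0 + math.exp(-(a + b * t)))

# Hypothetical parameters: ceiling 100, low initial penetration, growth 0.5/year.
print(logistic_diffusion(0, 100.0, -3.0, 0.5))    # early adoption
print(logistic_diffusion(6, 100.0, -3.0, 0.5))    # inflection point: exactly K/2
print(logistic_diffusion(20, 100.0, -3.0, 0.5))   # approaching saturation
```

In this framing, the regulatory effects the article reports correspond to shifts in the curve's location and growth parameters after the 2008 and 2010 measures, estimated separately for each municipality.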
Resumo:
This paper describes the use of liaisons to better integrate the product model and the assembly process model, so as to enable sharing of design and assembly process information in a common integrated form and reasoning about them. A liaison can be viewed as a set, usually a pair, of features in proximity with which process information can be associated. A liaison is defined as a set of geometric entities on the parts being assembled and relations between these geometric entities. Liaisons have been defined for riveting, welding, bolt fastening, screw fastening, adhesive bonding (gluing), and blind fastening processes. The liaison captures process-specific information through attributes associated with it. The attributes are associated with process details at varying levels of abstraction. A data structure for the liaison has been developed to cluster its attributes based on the level of abstraction. As information about liaisons is not explicitly available in either the part model or the assembly model, algorithms have been developed for extracting liaisons from the assembly model. The use of liaisons is proposed to enable both the construction of the process model as the product model is fleshed out and the maintenance of the integrity of both product and process models as the inevitable changes happen to both the design and the manufacturing environment during the product lifecycle. Results from the aerospace and automotive domains are provided to illustrate and validate the use of liaisons. (C) 2014 Elsevier Ltd. All rights reserved.
Resumo:
The proposed research will focus on developing a novel approach to solving software service evolution problems in computing clouds. The approach will support dynamic evolution of software services in clouds via a set of discovered evolution patterns. An initial survey indicated that no such approach exists yet and that one is urgently needed. Evolution requirements can be classified into evolution features; researchers can describe the whole requirement using an evolution feature typology, which defines the relations and dependencies between features. After the evolution feature typology has been constructed, an evolution model will be created to make the evolution more specific. An aspect-oriented approach can be used to enhance the modularity of the evolution feature model. An aspect template code generation technique will be used for model transformation at the end. Product line engineering contains all the essential components for driving the whole evolution process.
Resumo:
Software Product Line Engineering has significant advantages in family-based software development. The common and variable structure for all products of a family is defined through a Product-Line Architecture (PLA) that consists of a common set of reusable components and connectors which can be configured to build the different products. The design of a PLA requires solutions for capturing this configuration (variability). The Flexible-PLA Model is a solution that supports the specification of external variability of the PLA configuration, as well as internal variability of components. However, complete support for product-line development requires translating architecture specifications into code. This complex task needs automation to avoid human error. Since Model-Driven Development allows automatic code generation from models, this paper presents a solution to automatically generate AspectJ code from Flexible-PLA models previously configured to derive specific products. This solution is supported by a modeling framework and validated in a software factory.
Resumo:
Organisations have been approaching servitisation in an unstructured fashion. This is partially because there is insufficient understanding of the different types of Product-Service offerings. Therefore, a more detailed understanding of Product-Service types might advance the collective knowledge and assist organisations that are considering a servitisation strategy. Current models discuss specific aspects on the basis of only a few (or sometimes single) dimensions. In this paper, we develop a comprehensive model for classifying traditional and green Product-Service offerings, thus combining business and green offerings in a single model. We describe the model building process and its practical application in a case study. The model reveals the various traditional and green options available to companies and identifies how services compete with one another; it allows servitisation positions to be identified such that a company may track its journey over time. Finally, it fosters the introduction of innovative Product-Service Systems as promising business models to address environmental and social challenges. © 2013 Elsevier Ltd. All rights reserved.
Resumo:
The purpose of this descriptive study was to evaluate the banking and insurance technology curriculum at ten junior colleges in Taiwan. The study focused on curriculum, curriculum materials, instruction, support services, student achievement, and job performance. Data were collected from a diverse sample of faculty, students, alumni, and employers. Questionnaires on the evaluation of curricula at technical junior colleges were developed for use in this specific case. Data were collected from the sample described above and analyzed using ANOVA, t-tests, and cross-tabulations. The findings indicate that there is room for improvement in meeting individual students' needs. Using Stufflebeam's CIPP model for curriculum evaluation, it was determined that the curriculum was adequate in terms of the knowledge and skills imparted to students. However, students were dissatisfied with the rigidity of the curriculum and the lack of opportunity to satisfy their individual needs. Employers were satisfied with both the academic preparation of students and their on-the-job performance. In sum, the curriculum of the two-year banking and insurance technology programs at junior colleges in Taiwan was shown to have served adequately in preparing a work force to enter business. It is now time to look toward the future and adapt the curriculum and instruction to the future needs of an ever-evolving high-tech society.