995 results for Pent-up demand


Relevance:

30.00%

Publisher:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:


One of the residues generated in the processing of cassava (Manihot esculenta) is manipueira (cassava wastewater), which can be treated by anaerobic biodigestion. This work aimed to study the start-up of a plug-flow biodigester treating manipueira in two ways: gradually decreasing the hydraulic retention time (HRT) until reaching the pre-established time of four days; or keeping the HRT fixed at four days and gradually increasing the influent concentration. The biodigester, with a capacity of 1,980 mL, was kept at a temperature of 32 ± 1 °C. Manipueira was used as the substrate and the pH was adjusted to between 5.5 and 6.0. The first stage used HRTs of 16.6, 13.6, 11.6 and 9.6 days with organic loads of 3.1, 2.0, 2.3 and 2.9 g COD L-1 d-1, respectively. In the second stage the HRT was kept fixed at 4 days, with organic loads of 0.48, 0.86, 1.65 and 2.46 g COD L-1 d-1. Total solids (TS), volatile solids (VS), chemical oxygen demand (COD), alkalinity and volatile acidity were determined in the influent and effluent. In the first stage, the best results were observed with an HRT of 9.6 days and an organic load of 2.9 g COD L-1 d-1, with reductions in COD, TS and VS of 60%, 44% and 60%, respectively. In the second stage, the 4-day HRT gave the best results with an organic load of 0.86 g COD L-1 d-1, with reductions of 71%, 58% and 79% in COD, TS and VS, respectively. The start-up of a plug-flow biodigester treating manipueira can therefore be carried out either by decreasing the HRT or by keeping it fixed and increasing the influent concentration.


Introduction: The demand for optimal esthetics has increased with advances in implant dentistry and with the desire for easier, safer and faster techniques that allow predictable outcomes. The aim of this case report was therefore to describe a combined approach to the treatment of a periodontally compromised tooth by means of atraumatic tooth extraction, immediate flapless implant placement, autogenous block and particulate bone grafting followed by connective tissue grafting, and immediate provisionalization of the crown in the same surgical session. Case Report: A 27-year-old woman underwent the proposed surgical procedures for the treatment of her compromised maxillary right first premolar. The tooth was removed atraumatically with a periotome, without incision. A dental implant was inserted 3 mm apical to the cemento-enamel junction of the adjacent teeth, enabling an ideal three-dimensional implant position. An osteotomy was performed in the maxillary tuberosity for block bone graft harvesting, which allowed the reconstruction of the buccal alveolar plate. Thereafter, an autogenous connective tissue graft was placed to increase both the horizontal and vertical dimensions of the alveolar socket, meeting the patient's functional and esthetic expectations. Conclusion: This treatment protocol was effective in creating a harmonious gingival architecture with sufficient width and thickness, maintaining the stability of the alveolar bone crest and yielding excellent esthetic results after 2 years of follow-up. We suggest that this approach can be considered a viable alternative for the treatment of periodontally compromised teeth in the maxillary esthetic area, enhancing patient comfort and satisfaction.


[EN] Iron is essential for oxygen transport because it is incorporated in the heme of the oxygen-binding proteins hemoglobin and myoglobin. An interaction between iron homeostasis and oxygen regulation is further suggested during hypoxia, in which hemoglobin and myoglobin syntheses have been reported to increase. This study gives new insights into the changes in iron content and iron-oxygen interactions during enhanced erythropoiesis by simultaneously analyzing blood and muscle samples in humans exposed to 7 to 9 days of high altitude hypoxia (HA). HA up-regulates iron acquisition by erythroid cells, mobilizes body iron, and increases hemoglobin concentration. However, contrary to our hypothesis that muscle iron proteins and myoglobin would also be up-regulated during HA, this study shows that HA lowers myoglobin expression by 35% and down-regulates iron-related proteins in skeletal muscle, as evidenced by decreases in L-ferritin (43%), transferrin receptor (TfR; 50%), and total iron content (37%). This parallel decrease in L-ferritin and TfR in HA occurs independently of increased hypoxia-inducible factor 1 (HIF-1) mRNA levels and unchanged binding activity of iron regulatory proteins, but concurrently with increased ferroportin mRNA levels, suggesting enhanced iron export. Thus, in HA, the elevated iron requirement associated with enhanced erythropoiesis presumably elicits iron mobilization and myoglobin down-modulation, suggesting an altered muscle oxygen homeostasis.


Water is the driving force of nature. We use water for washing cars, doing laundry, cooking and showering, but also to generate energy and electricity; water is therefore a necessity in our daily lives (USGS, Howard Perlman, 2013). The model we created is based on the urban water demand computer model from the Pacific Institute (California). With this model we forecast the future urban water use of Emilia-Romagna up to the year 2030. We analyze urban water demand in Emilia-Romagna's nine provinces: Bologna, Ferrara, Forlì-Cesena, Modena, Parma, Piacenza, Ravenna, Reggio Emilia and Rimini. The term urban water refers to water used in cities and suburbs and in homes in rural areas; it includes residential, commercial, institutional and industrial use. In this research we cover the water-saving technologies that can help to reduce daily water use, and we project what influence these technologies have on urban water demand and what they can mean for future demand. Ongoing climate change can reduce the snowpack and cause extreme floods or droughts in Italy, and the changing climate and development patterns are expected to have a significant impact on future water demand. We examine this by conducting different scenario analyses, combining different population projections, climate influences and water-saving technologies; in addition, we conduct a sensitivity analysis. These analyses show how future urban water demand is likely to respond to changes in water conservation technologies, population, climate, water price and consumption. We hope the research contributes to the reader's own understanding of the issue.
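The scenario arithmetic behind such projections can be sketched as a simple multiplicative model. This is a minimal illustration with made-up figures and factor names, not the Pacific Institute model itself:

```python
def project_demand(population, per_capita_lpd, tech_factor, climate_factor):
    """Annual urban water demand in megalitres: population times per-capita
    use (litres per person per day), scaled by conservation and climate
    multipliers, over 365 days."""
    return population * per_capita_lpd * tech_factor * climate_factor * 365 / 1e6

# Illustrative 2030 scenarios for a single hypothetical province.
scenarios = {
    "baseline":     project_demand(1_000_000, 240, 1.00, 1.00),
    "conservation": project_demand(1_000_000, 240, 0.85, 1.00),  # water-saving tech
    "hot_climate":  project_demand(1_050_000, 240, 0.85, 1.08),  # growth + warming
}
```

Sensitivity analysis then amounts to perturbing one input at a time and observing the spread of the resulting demand figures.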


Demand for bio-fuels is expected to increase, due to rising prices of fossil fuels and concerns over greenhouse gas emissions and energy security. The overall cost of biomass energy generation is primarily related to biomass harvesting activity, transportation, and storage. With a commercial-scale cellulosic ethanol processing facility in Kinross Township of Chippewa County, Michigan about to be built, models including a simulation model and an optimization model have been developed to provide decision support for the facility. Both models track cost, emissions and energy consumption. While the optimization model provides guidance for a long-term strategic plan, the simulation model aims to present detailed output for specified operational scenarios over an annual period. Most importantly, the simulation model considers the uncertainty of spring break-up timing, i.e., seasonal road restrictions. Spring break-up timing is important because it will impact the feasibility of harvesting activity and the time duration of transportation restrictions, which significantly changes the availability of feedstock for the processing facility. This thesis focuses on the statistical model of spring break-up used in the simulation model. Spring break-up timing depends on various factors, including temperature, road conditions and soil type, as well as individual decision making processes at the county level. The spring break-up model, based on the historical spring break-up data from 27 counties over the period of 2002-2010, starts by specifying the probability distribution of a particular county’s spring break-up start day and end day, and then relates the spring break-up timing of the other counties in the harvesting zone to the first county. In order to estimate the dependence relationship between counties, regression analyses, including standard linear regression and reduced major axis regression, are conducted. 
Using realizations (scenarios) of spring break-up generated by the statistical spring breakup model, the simulation model is able to probabilistically evaluate different harvesting and transportation plans to help the bio-fuel facility select the most effective strategy. For early spring break-up, which usually indicates a longer than average break-up period, more log storage is required, total cost increases, and the probability of plant closure increases. The risk of plant closure may be partially offset through increased use of rail transportation, which is not subject to spring break-up restrictions. However, rail availability and rail yard storage may then become limiting factors in the supply chain. Rail use will impact total cost, energy consumption, system-wide CO2 emissions, and the reliability of providing feedstock to the bio-fuel processing facility.
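Reduced major axis regression, mentioned above as one of the techniques for relating counties' break-up dates, differs from ordinary least squares in that the slope is the ratio of the two standard deviations, signed by the correlation. A minimal sketch follows; the county day-of-year data below are invented for illustration, not the 2002-2010 records:

```python
import math

def rma_regression(x, y):
    """Reduced major axis (RMA) regression: slope = sign(cov) * sd(y)/sd(x),
    with the intercept chosen so the line passes through the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = math.copysign(math.sqrt(syy / sxx), sxy)
    return slope, my - slope * mx

# Hypothetical break-up start days (day of year): reference county vs. neighbour.
slope, intercept = rma_regression([68, 72, 75, 70, 80, 66],
                                  [70, 75, 77, 73, 84, 69])
```

Unlike OLS, RMA treats both counties' dates as subject to error, which is appropriate here since neither county is a controlled predictor.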


A 13-year-old neutered male domestic shorthaired cat had repeated syncopal episodes over a 6-month period, of variable duration and increasing frequency. Intermittent ventricular asystole due to complete heart block, together with hyperthyroidism, was documented. As the syncopal episodes did not respond to 4 weeks of medical treatment and the symptoms became severe, a transvenous ventricular demand pacemaker system (VVIM) was implanted via the external jugular vein. The unipolar lead was tunneled subcutaneously and connected to the generator in a preformed ventral abdominal muscle pocket. During an 18-month follow-up there were no recurrences of the syncopal episodes.


Clavicle reconstruction is a rare operation. In most cases a mid-shaft defect of the clavicle is bridged using different grafting techniques or musculo-osseous flaps. In some clinical situations where reconstruction is not a suitable option, claviculectomy as a salvage procedure has proven to be an acceptable solution. In the paediatric population, achieving both a good cosmetic and a good functional result when reconstructing large bone defects is even more demanding. To our knowledge, this is the first case of a successful clavicle reconstruction with sufficient follow-up using a free vascularised fibula graft in a child. This case provides a description of the technique, considerations in the paediatric population, an overview of other techniques used, and a long-term follow-up.


A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine able to solve NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (requiring high scalability) at the cost of high space complexity. In the NEP model, cells are represented by words encoding their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide through simple bio-operations, but only fit words (much as in natural selection) are kept for the next configuration. As a computation tool, a NEP defines a parallel and distributed symbolic-processing architecture, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proved. We can therefore consider that the theoretical NEP model has reached maturity.
The main motivation of this Final Degree Project is to propose a practical approach that takes the leap from the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems today's society demands. Until now, the tools developed to simulate the NEP model, while correct and yielding satisfactory results, have usually been tied to their execution environment, whether through the use of specific hardware or problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months that is currently in its second iteration, the prototype phase having been left behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix contains two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, i.e., the definitions of the most common processors and filters that make up the NEP model family. In addition, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, using a scripting language.
As part of the development of this component, a representation standard for the NEP model based on the JSON format has also been defined, together with a proposed representation and encoding of words, necessary for communication between servers. Additionally, an important feature of this component is that it can be considered an isolated application, so the distribution and execution strategies are entirely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective on executing natural computing systems. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the simulator's capabilities and restrictions. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word.
This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as non-determinism in the order of results and the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message-queue server is required. In this perspective, however, for sufficiently large problems the benefits outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered a success: the technology is viable, and the first results confirm that the properties originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic splitting of a NEP into different subdomains. Other problems beyond the scope of this project remain open to future development, for example the standardization of word representation and optimizations in the execution of the synchronous model.
Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Final Degree Project, is only the beginning of work that may have a wider impact on the scientific community. Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactic level. NEPs define theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.
The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the highly complex problems society faces today. Up until now, the tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.
The development involved a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix successful as a computational framework: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued have been met. Many investigation branches are left open for future research. In this EOG, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.
Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
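The configuration-by-configuration dynamics described above (mutate words, then keep only those passing a filter) can be illustrated with a toy evolution-plus-selection step. This is a deliberately minimal sketch, not Nepfix's actual API; the mutation rule and output filter below are invented:

```python
def evolve(words, rules):
    """One evolutionary step: apply every point-mutation rule (src -> dst)
    at every matching position of every word."""
    out = set()
    for w in words:
        for src, dst in rules:
            for i, ch in enumerate(w):
                if ch == src:
                    out.add(w[:i] + dst + w[i + 1:])
    return out

def communicate(words, output_filter):
    """Selection between configurations: only words passing the processor's
    output filter survive."""
    return {w for w in words if output_filter(w)}

# Toy network: rewrite 'a' to 'b'; accept words with no 'a' left.
config = {"aab"}
for _ in range(3):
    config = evolve(config, [("a", "b")]) | config   # mutants join the pool
    survivors = communicate(config, lambda w: "a" not in w)
    if survivors:
        break
```

A real NEP additionally routes words between processors through input/output filters on network edges; here a single processor suffices to show the mutate-then-select cycle.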


In relation to the current interest in gas storage for environmental applications (e.g., gas transportation and carbon dioxide capture) and for energy purposes (e.g., methane and hydrogen), high-pressure adsorption (physisorption) on highly porous sorbents has become an attractive option. Considering that high-pressure adsorption requires a sorbent with both high porosity and high density, the present paper investigates gas storage enhancement on selected carbon adsorbents, on both a gravimetric and a volumetric basis. Results on carbon dioxide, methane and hydrogen adsorption at room temperature (i.e., supercritical and subcritical gases) are reported. The results confirm the importance of both parameters (porosity and density) of the adsorbents. Hence, the densest of the different carbon materials used is selected to study a scale-up gas storage system, with a 2.5 L cylinder tank containing 2.64 kg of adsorbent. The scale-up results are in agreement with the laboratory-scale ones and highlight the importance of adsorbent density for volumetric storage performance, reaching, at 20 bar and at room temperature, 376 g L-1, 104 g L-1 and 2.4 g L-1 for CO2, CH4 and H2, respectively.
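The link between the two bases reported above is direct: volumetric capacity is gravimetric uptake times the packing density of the adsorbent bed. A small sketch using the scale-up figures quoted (2.64 kg in a 2.5 L tank); the gravimetric uptake here is back-calculated for illustration, not a value taken from the paper:

```python
def volumetric_capacity(uptake_g_per_g, packing_density_kg_per_l):
    """Gas stored per litre of tank (g/L) = gravimetric uptake (g gas per g
    adsorbent) * packing density (kg adsorbent per litre) * 1000 g/kg."""
    return uptake_g_per_g * packing_density_kg_per_l * 1000.0

rho = 2.64 / 2.5             # packing density of the scale-up bed, ~1.06 kg/L
uptake = 376 / (rho * 1000)  # g CO2 per g adsorbent implied by 376 g/L at 20 bar
```

This multiplicative relation is why a dense adsorbent can outperform a more porous but lighter one on a volumetric basis.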


The aim of this technical report is to quantify alternative energy demand and supply scenarios for ten southern and eastern Mediterranean countries up to 2030. The report presents the model-based results of four alternative scenarios that are broadly in line with the MEDPRO scenario specifications on regional integration and cooperation with the EU. The report analyses the main implications of the scenarios in the following areas:
• final energy demand by sector (industry, households, services, agriculture and transport);
• the evolution of the power generation mix, the development of renewable energy sources and electricity exports to the EU;
• primary energy production and the balance of trade for hydrocarbons;
• energy-related CO2 emissions; and
• power generation costs.


The most significant environmental change to support people who want to give up smoking is the legislation to ban smoking in public places. Following Scotland in March 2006, and Wales and Northern Ireland in April 2007, England moves one step closer to being smoke-free on 1 July 2007, when it becomes illegal to smoke in almost every enclosed public place and workplace. Social marketing will be used to support this health-promoting policy and will become more prominent in the design of future health promotion campaigns. Social marketing is not a new approach to promoting health, but its adoption by the Government does represent a paradigm shift in the challenge to change public opinion and social norms. As a result, some behaviours, like smoking or excessive alcohol consumption, will no longer be socially acceptable. The Department of Health has decided that social marketing should be used in England to guide all future health promotion efforts directed at achieving behavioural goals. This paradigm shift was announced in Chapter 2 of the "Choosing health" White Paper, with its emphasis on the consumer, noting that a wide range of lifestyle choices are marketed to people, although health as a commodity has not itself been marketed. The DoH has an internal social marketing development unit to integrate social marketing principles into its work and ensure that providers deliver. The National Centre for Social Marketing has funding to provide ongoing support and to build capacity and capability in the workforce. This article describes the distinguishing features of the social marketing approach and seeks to answer some questions. Is this really a new idea, a paradigm shift, or simply a change in terminology? What do the marketing principles offer that is new, or are they merely familiar ideas repackaged in marketing jargon? Will these principles be more effective than current health promotion practice and, if so, how?
Finally, what are the implications for community pharmacy?


This work concerns the development of a proton-induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam-pulsing system and its counting-rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam-pulsing operation. The characteristic X-rays were detected with a high-resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting-rate capability of 25, 18 and 10 kPPS at main amplifier time constants of 2, 4 and 8 µs respectively, enables thick targets to be analysed more readily. Reproducibility tests of the on-demand beam-pulsing system were carried out by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by characteristic X-ray fluorescence when one element's characteristic energy lies above the absorption-edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, is demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique for obtaining depth profiles of the elements in the upper micron of the sample is discussed.


Geography and retail store locations are inherently bound together; this study links food retail changes to systemic logistics changes in an emerging market. The latter include rising incomes and education levels, access to a wide range of technologies, traffic and transport difficulties, lagging retail provision, changing family structures and roles, and changing food culture and taste. The study incorporates demand for premium products, defined by Kapferer and Bastien [2009b. The Luxury Strategy. London: Kogan Page] as comprising a broad variety of higher-quality and unique or distinctive products and brands, including grocery organic ranges, healthy options, allergy-free selections, and international and gourmet/specialty products, through an online grocery model (n = 356) that integrates a novel view of home delivery in Istanbul. More importantly, from a logistics perspective our model incorporates any product from any online vendor, broadening the range beyond the listed items found in traditional online supermarkets. Data collected via phone survey and analysed via structural equation modelling suggest that the offer of online premium products significantly affects consumers' delivery logistics expectations. We discuss logistics operations and business management implications, identifying the emerging geography of logistics models which respond to consumers' unmet expectations using multiple sourcing and consolidation points.