920 results for Cost control


Relevance:

30.00%

Publisher:

Abstract:

Empirical validity of the claim that overhead costs are driven not by production volume but by transactions resulting from production complexity is examined using data from 32 manufacturing plants in the electronics, machinery, and automobile components industries. Transactions are measured using the number of engineering change orders, the number of purchasing and production planning personnel, shop-floor area per part, and the number of quality control and improvement personnel. Results indicate a strong positive relation between manufacturing overhead costs and both manufacturing transactions and production volume. Most of the variation in overhead costs, however, is explained by the measures of manufacturing transactions, not by volume.
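The comparison described above can be sketched as two ordinary least-squares fits whose explained variation is compared. This is only an illustration of the analysis, not the study's actual data: all numbers below are synthetic, and the variable names are placeholders.

```python
import numpy as np

# Hypothetical stand-in for the study's data: overhead cost driven mostly by
# a transaction measure, weakly by production volume (all values synthetic).
rng = np.random.default_rng(0)
n = 32                                    # 32 plants, as in the study
transactions = rng.uniform(10, 100, n)    # e.g. engineering change orders
volume = rng.uniform(1e3, 1e4, n)
overhead = 5.0 * transactions + 0.01 * volume + rng.normal(0, 20, n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_txn = r_squared(transactions.reshape(-1, 1), overhead)
r2_vol = r_squared(volume.reshape(-1, 1), overhead)
```

With data generated this way, the transaction measure explains most of the variation in overhead, mirroring the paper's finding.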

Relevance:

30.00%

Publisher:

Abstract:

Cognitive radio (CR) is fast emerging as a promising technology that can meet the machine-to-machine (M2M) communication requirements for spectrum utilization and power control for the large number of machines/devices expected to be connected to the Internet of Things (IoT). Power control for a CR acting as a secondary user can be modelled with a non-cooperative game cost function to quantify and reduce the effects of interference while it occupies the same spectrum as the primary user, without adversely affecting the required quality of service (QoS) in the network. In this paper, a power loss exponent that factors in the diverse operating environments of IoT is employed in the non-cooperative game cost function to quantify the required transmission power in the network. The approach would enable CRs to transmit with less power, thereby saving battery consumption, or would allow a larger number of secondary users, thereby using network resources more efficiently.
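The flavor of iterative, game-style power control can be illustrated with a classic Foschini-Miljanic-type update, in which each user adjusts its power toward a target SINR while treating the others' powers as fixed. This is only a sketch of the general technique; the paper's actual cost function and its path-loss-exponent term are more elaborate, and the link gains below are hypothetical.

```python
def power_control(gains, noise, target_sinr, p0, n_iter=100):
    """Iterative power control: each user i scales its power so that its
    SINR moves toward target_sinr, given the current powers of the others."""
    p = list(p0)
    n = len(p)
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            interference = noise + sum(gains[i][j] * p[j]
                                       for j in range(n) if j != i)
            sinr = gains[i][i] * p[i] / interference
            new_p.append(p[i] * target_sinr / sinr)
        p = new_p
    return p

# Two secondary users with hypothetical link gains and receiver noise.
gains = [[1.0, 0.1], [0.1, 1.0]]
powers = power_control(gains, noise=0.1, target_sinr=2.0, p0=[1.0, 1.0])
```

When the target SINR is feasible, the iteration converges to the minimal power vector that meets it, which is the sense in which such schemes save battery while accommodating more users.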

Relevance:

30.00%

Publisher:

Abstract:

This article is part of the development of an algorithm that accounts for the influence of terrain on electromagnetic propagation in a semi-urban environment, working in the UHF band (300 MHz-3 GHz) used by current and future wireless communication systems (e.g. 800-900 MHz in DAMPS/US-TDMA/IS-136 and GSM systems). The base model used in the research, COST231-Walfisch-Ikegami, proved beneficial, together with Geographic Information Systems (GIS) and planning tools, for carrying out propagation studies, estimating coverage and analysing the main factors that affect the planning of a cellular mobile system. The article describes the basic concepts used to develop the applied algorithm, the conditions under which the measurement campaigns were carried out, and the validation process whose results confirm the usefulness of the developed algorithm for path-loss prediction. The work combined the COST231-Walfisch-Ikegami propagation model with the Cell-View planning tool (built on the ArcView GIS), together with measurements taken in the city of Bucaramanga using a mobile radio-communication and spectrum-monitoring unit owned by the Colombian Ministry of Communications, Bucaramanga branch. The study, carried out for the IS-136 band, produced simulation and validation results that corroborate the approximations employed.
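For reference, the line-of-sight branch of the COST231-Walfisch-Ikegami model used as the base here is a simple closed form; a minimal sketch follows. Only the LOS case is shown — the full model applied in the paper adds multi-screen diffraction and rooftop-to-street terms for the non-LOS case, and the example frequency and distance are illustrative.

```python
import math

def cost231_wi_los(d_km, f_mhz):
    """COST231-Walfisch-Ikegami path loss (dB), LOS branch only:
    L = 42.6 + 26*log10(d) + 20*log10(f), d in km, f in MHz."""
    if d_km < 0.02:
        raise ValueError("LOS formula is specified for d >= 20 m")
    return 42.6 + 26.0 * math.log10(d_km) + 20.0 * math.log10(f_mhz)

# 1 km at 850 MHz, i.e. roughly the IS-136 band studied in the paper.
loss = cost231_wi_los(1.0, 850.0)
```

Loss grows with both distance and frequency, which is what the coverage-estimation step of the planning tool exploits.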

Relevance:

30.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
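The placement challenge can be made concrete with a toy sketch (this is not SWORD itself): given a workload of transactions and a mapping of data items to partitions, count how many transactions span more than one partition. The items, transactions and placements below are made up.

```python
# Toy illustration of workload-aware placement: a transaction is
# "distributed" if the items it touches live on more than one partition.
def distributed_txn_count(transactions, placement):
    """transactions: list of sets of items; placement: item -> partition."""
    return sum(len({placement[x] for x in txn}) > 1 for txn in transactions)

txns = [{"a", "b"}, {"a", "b"}, {"c", "d"}, {"b", "c"}]

# A workload-oblivious placement vs. one that co-locates frequently
# co-accessed items (both hypothetical).
oblivious = {k: ord(k) % 2 for k in "abcd"}
aware = {"a": 0, "b": 0, "c": 1, "d": 1}

n_oblivious = distributed_txn_count(txns, oblivious)
n_aware = distributed_txn_count(txns, aware)
```

Co-locating the hot pair {a, b} leaves only one distributed transaction instead of four, which is the kind of reduction workload-aware placement targets.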
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
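The neighborhood-centric abstraction can be sketched as extracting each vertex's k-hop subgraph and applying a user function to it. This is purely illustrative and not NSCALE's actual API; the graph below is a hypothetical four-vertex path.

```python
from collections import deque

def k_hop_neighborhood(adj, source, k):
    """Vertices within k hops of source in an undirected graph (BFS)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

# A tiny path graph 0 - 1 - 2 - 3, stored as adjacency lists.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# Ego-network-style analysis: size of each vertex's 1-hop neighborhood.
sizes = {v: len(k_hop_neighborhood(adj, v, 1)) for v in adj}
```

A neighborhood-centric framework would run such a per-subgraph computation in a distributed fashion, rather than forcing the program to reason one vertex at a time.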

Relevance:

30.00%

Publisher:

Abstract:

The recently developed network-wide real-time signal control strategy TUC has been implemented in three traffic networks with quite different traffic and control infrastructure characteristics: Chania, Greece (23 junctions); Southampton, UK (53 junctions); and Munich, Germany (25 junctions), where it has been compared to the respective resident real-time signal control strategies TASS, SCOOT and BALANCE. After a short outline of TUC, the paper describes the three application networks; the application, demonstration and evaluation conditions; as well as the comparative evaluation results. The main conclusion drawn from this high-effort inter-European undertaking is that TUC is an easy-to-implement, interoperable, low-cost real-time signal control strategy whose performance, after very limited fine-tuning, proved to be better than, or at least similar to, that achieved by long-standing strategies that were in most cases very well fine-tuned over the years in the specific networks.
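To my understanding, TUC rests on a store-and-forward modelling of the network, in which the number of vehicles on each link evolves by a simple conservation equation and green times are then computed by multivariable feedback. A minimal sketch of just the conservation step follows; the cycle length, flows and horizon below are illustrative, not taken from the paper.

```python
def update_queue(x, inflow, outflow, cycle_s):
    """Store-and-forward link update: x(k+1) = x(k) + T*(inflow - outflow),
    with x the vehicles on the link, flows in veh/s, T the cycle in s."""
    return max(0.0, x + cycle_s * (inflow - outflow))

# Three 90-second cycles on a link where demand exceeds the served flow,
# so the queue grows by 9 vehicles per cycle.
x = 20.0
for _ in range(3):
    x = update_queue(x, inflow=0.5, outflow=0.4, cycle_s=90.0)
```

A network-wide strategy regulates the outflows (via green splits) so that such queues stay balanced across links.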

Relevance:

30.00%

Publisher:

Abstract:

This paper describes how the recently developed network-wide real-time signal control strategy TUC has been implemented in three traffic networks with quite different traffic and control infrastructure characteristics: Chania, Greece (23 junctions); Southampton, U.K. (53 junctions); and Munich, Germany (25 junctions), where it has been compared to the respective resident real-time signal control strategies TASS, SCOOT and BALANCE. After a short outline of TUC, the paper describes the three application networks; the application, demonstration and evaluation conditions; as well as the comparative evaluation results. The main conclusion drawn from this high-effort inter-European undertaking is that TUC is an easy-to-implement, interoperable, low-cost real-time signal control strategy whose performance, after limited fine-tuning, proved to be better than, or at least similar to, that achieved by long-standing strategies that were in most cases very well fine-tuned over the years in the specific networks.

Relevance:

30.00%

Publisher:

Abstract:

The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave/boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device, micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs can alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves boundary layer health when the flow is subjected to a disturbance. Due to their small size, µVGs are embedded in the boundary layer, which reduces drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and of the primary vortices generated by the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations.
The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated including micro-ramps and micro-vanes. The results showed that vortices generated from µVGs can partially eliminate shock induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation especially after the shock interaction. In addition, the close spanwise distance between each vortex for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects. Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. 
These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the effect of Mach number on flow past the micro-ramps (h≈0.5δ) is examined in a supersonic boundary layer at M=1.4, 2.2 and 3.0, with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach numbers near the device, but their influence decays faster than in the higher Mach number cases. This may be due to additional dissipation caused by the primary vortices, whose smaller effective diameter at the lower Mach number means that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall grew similarly across cases, indicating a weak correlation with Mach number; however, the spanwise distance between the two counter-rotating cores increases further at lower Mach numbers. Finally, several µVGs, including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device, and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with an increased trailing edge gap yielded fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices.
Additional benefits include negligible drag, while reductions in displacement thickness and shape factor were seen compared to the other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also showed decreased amplitudes of the pressure fluctuations downstream of the shock.
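The incompressible shape factor used throughout as a boundary-layer-health metric is H = δ*/θ, the ratio of displacement to momentum thickness. The sketch below computes it for a standard 1/7th-power turbulent profile as a stand-in for the measured data (for that profile H = 9/7 ≈ 1.29 analytically); the grid and profile are illustrative only.

```python
import numpy as np

def trapezoid(f, y):
    """Composite trapezoidal rule for samples f on grid y."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def shape_factor(y, u, u_edge):
    """Incompressible shape factor H = delta*/theta from a profile u(y):
    delta* = int (1 - u/Ue) dy,  theta = int (u/Ue)(1 - u/Ue) dy."""
    r = u / u_edge
    delta_star = trapezoid(1.0 - r, y)        # displacement thickness
    theta = trapezoid(r * (1.0 - r), y)       # momentum thickness
    return delta_star / theta

delta = 1.0
y = np.linspace(0.0, delta, 20001)
u = (y / delta) ** (1.0 / 7.0)                # 1/7th-power profile, Ue = 1
H = shape_factor(y, u, 1.0)
```

Lower H indicates a fuller, healthier profile, which is why the µVG results are reported in terms of reductions in shape factor.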

Relevance:

30.00%

Publisher:

Abstract:

Fiji exports approximately 800 t year-1 of 'Solo Sunrise' papaya, marketed as 'Fiji Red', to international markets including New Zealand, Australia and Japan. The wet weather conditions from November to April each year result in a significant increase in fungal diseases in Fiji papaya orchards. The two major pathogens causing significant post-harvest losses are stem end rot (Phytophthora palmivora) and anthracnose (Colletotrichum spp.). The high incidence of post-harvest rots has led to increased rejection rates along the supply chain, causing a reduction in income for farmers, exporters, importers and retailers of Fiji papaya. It has also undermined the fruit's reputation for superior quality on the market. In response to this issue, the Fiji papaya industry, led by Nature's Way Cooperative, embarked on a series of trials supported by the Australian Centre for International Agricultural Research (ACIAR) to determine the most effective and economical post-harvest control for Fiji papaya. Of all the treatments examined, a hot water dip treatment was selected by the industry as the most appropriate technology, given the level of control that it provides, the cost-effectiveness of the treatment and the fact that it is non-chemical. A commercial hot water unit that fits with the existing quarantine treatment and packing facilities has been designed, and a cost-benefit analysis for the investment carried out. This paper explores the research findings as well as the industry process that has led to the commercial uptake of this important technology.

Relevance:

30.00%

Publisher:

Abstract:

This work used free/open-source hardware and software tools to set up a low-cost, easily deployed cellular base transceiver station (BTS). Starting from the technical concepts that facilitate installation of the OpenBTS system, and using the USRP N210 (Universal Software Radio Peripheral) hardware, a network analogous to the GSM mobile telephony standard was deployed. Mobile phones were registered as SIP (Session Initiation Protocol) extensions in Asterisk, making it possible to place calls between terminals, send text messages (SMS), and make calls from an OpenBTS terminal to another mobile operator, among other services.

Relevance:

30.00%

Publisher:

Abstract:

The majority of research in Operations Research uses methods and algorithms to optimize the pick-up and delivery problem. Most studies aim to solve the vehicle routing problem, to accommodate optimal delivery orders, vehicles, etc. This paper focuses on a green logistics approach, in which the existing public transport infrastructure of a city is used for the delivery of small and medium-sized packaged goods, thus helping to reduce urban congestion and greenhouse gas emissions. A study was carried out to investigate the feasibility of the proposed multi-agent-based simulation model in terms of cost, time and energy consumption. A multimodal Dijkstra shortest-path algorithm and Nested Monte Carlo Search are employed in a two-phase algorithmic approach used to generate a time-based cost matrix. The quality of the tour depends on the efficiency of the search algorithm implemented for plan generation and route planning. The results reveal a definite advantage of using public transportation over existing delivery approaches in terms of energy efficiency.
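The first phase of the two-phase approach, shortest paths over a time-weighted multimodal graph, can be sketched with plain Dijkstra. The network below (walk and tram legs, weights in minutes) is hypothetical, and this sketch omits the timetable and transfer handling a real multimodal router would need.

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbor, minutes). Returns shortest times
    from source to every reachable node."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        for w, t in graph.get(v, []):
            nd = d + t
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return dist

# Walk to a stop, ride tram legs, walk to the depot (illustrative weights).
net = {
    "home": [("stop_a", 5.0)],
    "stop_a": [("stop_b", 12.0)],
    "stop_b": [("depot", 4.0), ("stop_c", 6.0)],
    "stop_c": [("depot", 1.0)],
}
times = dijkstra(net, "home")
```

Running this over every origin-destination pair yields the time-based cost matrix that the Nested Monte Carlo Search phase then consumes.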

Relevance:

30.00%

Publisher:

Abstract:

With global markets and global competition, pressures are placed on manufacturing organizations to compress order fulfillment times, meet delivery commitments consistently and also maintain efficiency in operations to address cost issues. This chapter argues for a process perspective on planning, scheduling and control that integrates organizational planning structures, information systems as well as human decision makers. The chapter begins with a reconsideration of the gap between theory and practice, in particular for classical scheduling theory and hierarchical production planning and control. A number of the key studies of industrial practice are then described and their implications noted. A recent model of scheduling practice derived from a detailed study of real businesses is described. Socio-technical concepts are then introduced and their implications for the design and management of planning, scheduling and control systems are discussed. The implications of adopting a process perspective are noted along with insights from knowledge management. An overview is presented of a methodology for the (re-)design of planning, scheduling and control systems that integrates organizational, system and human perspectives. The most important messages from the chapter are then summarized.

Relevance:

30.00%

Publisher:

Abstract:

The demand for ever greater food production worldwide, together with the mechanization and accelerated pace of progress of today's farming operations, means that livestock must withstand high production pressures, which increase their nutrient requirements. This is the case for minerals, now considered essential elements for animals, although they were traditionally regarded as the poor relations of animal nutrition and feeding. Clinical and production evidence has now demonstrated the important metabolic role of minerals in the healthy, productive animal, and it has been established which mineral elements, and in what proportion, are required for the normal functioning of the organism. The macro-minerals (calcium, magnesium, phosphorus, sodium, potassium, chlorine and sulfur) and the trace minerals (copper, zinc, iron, selenium, cobalt, iodine, manganese, molybdenum and chromium) are essential elements needed to transform the protein and energy of feed into body components or into animal products such as milk, meat, offspring, hides and wool. They also help the organism fight disease, keeping the animal in good health. Minerals have been considered the third limiting group in animal nutrition and, at the same time, the one with the greatest potential and lowest cost for increasing livestock production. Minerals perform functions as important as forming part of the bone and dental structure, of soft tissues and of body fluids. They are involved in cellular function, acting as activators of more than three hundred enzymes, serving as essential constituents of vitamins, hormones and respiratory pigments, and facilitating the activity of rumen microorganisms.
When the mineral supply in the ration is inadequate in quality and/or quantity, mineral deficiencies arise, classed among the metabolic diseases or diseases of production. These have been reported almost worldwide and are responsible for significant economic losses in beef cattle herds. Mineral deficiencies and/or imbalances can cause the following disorders in animals: low calving rates, a higher number of services per conception, abortions, retained placentas, longer calving intervals, low milk production, lower birth and weaning weights, lower weaning rates, lower weight gain, a higher incidence of infectious diseases, spontaneous fractures, diarrhea, bone deformation and death. Diagnosis therefore becomes important, based on analysis of the animals' blood and of the pasture and water they consume, and on characterizing these deficiencies as primary or secondary, with the aim of controlling them through an adequate mineral supplementation plan tailored to the needs of each farming operation.

Relevance:

30.00%

Publisher:

Abstract:

Background: Diabetes mellitus (DM) is now prevalent in many countries in sub-Saharan Africa, with associated health and socioeconomic consequences. Adherence to antidiabetic medications has been shown to improve glycaemic control, which subsequently improves both the short- and long-term prognosis of the disease. The main objective of this study was to assess the level of adherence to antidiabetic drugs among outpatients in a teaching hospital in southwestern Nigeria. Methods: A cross-sectional study was carried out using the eight-item Morisky Medication Adherence Scale (MMAS-8) among diabetic patients attending the medical outpatients' diabetes clinic of Ladoke Akintola University Teaching Hospital, in Ogbomosho, Oyo State, southwestern Nigeria, during a three-month period (October to December 2013). Results: A total of 129 patients participated in the study, with a male-to-female ratio of 1:1.5. Seventy-eight (60.5%) patients had systemic hypertension as a comorbid condition, while the remainder were being managed for diabetes mellitus alone. Only 6 (4.7%) of the patients had type 1 DM, while the remaining 123 (95.3%) were diagnosed with type 2 DM. Metformin was the most prescribed oral hypoglycaemic agent (n = 111, 58.7%), followed by glibenclamide (n = 49, 25.9%). Medication adherence was classified as good, medium and poor for 52 (40.6%), 42 (32.8%) and 34 (26.6%) patients, respectively. Medication costs accounted for 72.3% of the total direct cost of DM in this study, followed by the cost of laboratory investigations (17.6%). Conclusion: Adherence of the diabetes patients in the study sample to their medications was satisfactory. There is a need for the integration of generic medicines into routine care as a way of further reducing the burden of healthcare expenditure on patients.
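The good/medium/poor grouping reported above follows the conventional reading of MMAS-8 totals: a score of 8 is high (good) adherence, 6 to just under 8 is medium, and below 6 is low (poor). The sketch below shows only that binning; the item-level scoring (including the scale's reverse-coded item and Likert item) is not reproduced here, and the example scores are made up.

```python
def classify_adherence(score):
    """Bin an MMAS-8 total (0-8) into the good/medium/poor categories
    used in the study (high = 8, medium = 6 to <8, low = <6)."""
    if not 0 <= score <= 8:
        raise ValueError("MMAS-8 totals range from 0 to 8")
    if score == 8:
        return "good"
    if score >= 6:
        return "medium"
    return "poor"

levels = [classify_adherence(s) for s in (8, 7.25, 5)]
```

Fractional totals occur because the scale's final Likert item contributes in quarter-point steps.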

Relevance:

30.00%

Publisher:

Abstract:

A variable-width pulse generator featuring more than 4 V peak amplitude and less than 10 ns FWHM is described. In this design, the width of the pulses is controlled by means of the slope of the control signal. To this end, a variable transition time control circuit (TTCC) is also developed, based on charging and discharging a capacitor with two tunable current sources. Additionally, it is possible to activate and deactivate the pulses on demand, allowing the creation of any desired pulse pattern. Furthermore, the implementation presented here can be electronically controlled. Due to its versatility, compactness and low cost, it can be used in a wide variety of applications.
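The slope control at the heart of the TTCC follows the basic capacitor relation t = C·ΔV/I: the time for a constant current to slew the control node through a given voltage swing. The component values below are hypothetical, chosen only to show how raising the tunable current shortens the transition, and with it the generated pulse width.

```python
def transition_time(c_farads, delta_v, i_amps):
    """Time for a constant current to slew a capacitor through delta_v:
    t = C * dV / I."""
    return c_farads * delta_v / i_amps

# Hypothetical values: 10 pF swung through 3 V.
t_slow = transition_time(10e-12, 3.0, 1e-3)   # 1 mA source
t_fast = transition_time(10e-12, 3.0, 3e-3)   # 3 mA source: faster edge
```

Tripling the charging current cuts the transition time by the same factor, which is how the two tunable current sources set the pulse width electronically.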