862 results for Operational frequency
Resumo:
The impact of pronounced positive and negative sea surface temperature (SST) anomalies in the tropical Pacific associated with the El Niño/Southern Oscillation (ENSO) phenomenon on the atmospheric circulation in the Northern Hemisphere extratropics during the boreal winter season is investigated. This includes both the impact on the seasonal mean flow and on the intraseasonal variability on synoptic time scales. Moreover, the interaction between the transient fluctuations on these time scales and the mean circulation is examined. Two data sets are examined: an ensemble of five simulations with the ECHAM3 atmospheric general circulation model at a horizontal resolution of T42, each covering the period from 1979 through 1992, and operational analyses from ECMWF for the corresponding period. In each of the simulations, observed SSTs for the period of investigation are prescribed as lower boundary forcing, but different atmospheric initial conditions are used. The simulations with ECHAM3 reveal a distinct impact of the pronounced SST anomalies in the tropical Pacific on the atmospheric circulation in the Northern Hemisphere extratropics during El Niño as well as during La Niña events. These changes in the atmospheric circulation, which are highly significant in the Pacific/North American as well as in the Atlantic/European region, are consistent with the essential results obtained from the analyses. The pronounced SST anomalies in the tropical Pacific lead to changes in the mean circulation that are characterized by typical circulation patterns. These changes in the mean circulation are accompanied by marked variations in the activity of the transient fluctuations on synoptic time scales, that is, changes in both the kinetic energy on these time scales and the atmospheric transports of momentum and heat accomplished by the short baroclinic waves.
The synoptic disturbances, on the other hand, also play an important role in controlling the changes in the mean circulation associated with the ENSO phenomenon. They maintain these typical circulation patterns via barotropic processes but counteract them via baroclinic processes. The hypothesis of an impact of the ENSO phenomenon in the Atlantic/European region is supported. The determining factor is identified as the intensification (reduction) of the Aleutian low and the simultaneous reduction (intensification) of the Icelandic low during El Niño (La Niña) events. The changes in the intensity of the Aleutian low during ENSO events are accompanied by an alteration of the momentum transport by the short baroclinic waves over the North American continent in such a way that the changes in the intensity of the Icelandic low during both El Niño and La Niña events are maintained.
Resumo:
Radar refractivity retrievals can capture near-surface humidity changes, but noisy phase changes of the ground clutter returns limit the accuracy for both klystron- and magnetron-based systems. Observations with a C-band (5.6 cm) magnetron weather radar indicate that the correction for phase changes introduced by local oscillator frequency changes leads to refractivity errors no larger than 0.25 N units: equivalent to a relative humidity change of only 0.25% at 20°C. Requested stable local oscillator (STALO) frequency changes were accurate to 0.002 ppm based on laboratory measurements. More serious are the random phase change errors introduced when targets are not at the range-gate center and there are changes in the transmitter frequency (ΔfTx) or the refractivity (ΔN). Observations at C band with a 2-μs pulse show an additional 66° of phase change noise for a ΔfTx of 190 kHz (34 ppm); this allows the effect due to ΔN to be predicted. Even at S band with klystron transmitters, significant phase change noise should occur when a large ΔN develops relative to the reference period [e.g., ~55° when ΔN = 60 for the Next Generation Weather Radar (NEXRAD) radars]. At shorter wavelengths (e.g., C and X band) and with magnetron transmitters in particular, refractivity retrievals relative to an earlier reference period are even more difficult, and operational retrievals may be restricted to changes over shorter (e.g., hourly) periods of time. Target location errors can be reduced by using a shorter pulse or identified by a new technique making alternate measurements at two closely spaced frequencies, which could even be achieved with a dual–pulse repetition frequency (PRF) operation of a magnetron transmitter.
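The phase sensitivity described above follows from the two-way propagation phase. A minimal sketch, assuming the standard relation Δφ = 4π·δr·ΔfTx/c for a point target offset δr from the range-gate centre (the pulse length and frequency shift are those quoted in the abstract; the abstract's 66° figure comes from the full noise statistics, which this sketch does not reproduce):

```python
import math

def phase_change_deg(delta_r_m, delta_f_hz, c=3.0e8):
    """Two-way phase change (degrees) seen by a point target offset
    delta_r_m from the range-gate centre when the transmitter
    frequency shifts by delta_f_hz: dphi = 4*pi*delta_r*delta_f/c."""
    return math.degrees(4 * math.pi * delta_r_m * delta_f_hz / c)

# A 2-us pulse illuminates ~300 m of range, so a target can sit up to
# ~150 m from the gate centre; use the abstract's 190 kHz shift.
dphi = phase_change_deg(delta_r_m=150.0, delta_f_hz=190e3)
print(round(dphi, 1))  # 68.4 degrees at the gate edge
```

This is the same order of magnitude as the 66° of additional phase noise reported in the observations.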
Resumo:
The objective of this study was to select the optimal operational conditions for the production of instant soy protein isolate (SPI) by pulsed fluid bed agglomeration. The spray-dried SPI was characterized as being a cohesive powder, presenting cracks and channeling formation during its fluidization (Geldart type A). The process was carried out in a pulsed fluid bed, and aqueous maltodextrin solution was used as liquid binder. Air pulsation, at a frequency of 600 rpm, was used to fluidize the cohesive SPI particles and to allow agglomeration to occur. Seventeen tests were performed according to a central composite design. Independent variables were (i) feed flow rate (0.5-3.5 g/min), (ii) atomizing air pressure (0.5-1.5 bar) and (iii) binder concentration (10-50%). Mean particle diameter, process yield and product moisture were analyzed as responses. Surface response analysis led to the selection of optimal operational parameters, following which larger granules with low moisture content and high process yield were produced. Product transformations were also evaluated by the analysis of size distribution, flowability, cohesiveness and wettability. When compared to raw material, agglomerated particles were more porous and had a more irregular shape, presenting a wetting time decrease, free-flow improvement and cohesiveness reduction. (C) 2010 Elsevier B.V. All rights reserved.
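The response-surface step behind the central composite design can be sketched as follows. The three coded factors stand in for feed flow rate, atomizing air pressure and binder concentration, and the response for mean particle diameter, but the design points and coefficients below are synthetic, not the study's measurements:

```python
import numpy as np

# Fit a second-order response surface to a 17-run design in three coded
# factors (as in the abstract), then check the coefficients are recovered.
# The design points and "true" coefficients are invented for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(17, 3))  # 17 runs, 3 coded factors

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

true_beta = np.array([50, 4, -3, 2, 0.5, 0, 0, -6, -5, -4])
y = design_matrix(X) @ true_beta                    # noise-free response
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.allclose(beta, true_beta))                 # True
```

In the study itself the fitted surface is then optimized over the factor ranges to pick the operating point.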
Resumo:
OBJECTIVES: Drawing on a corpus of 200 academic texts and documents from national and international organizations dedicated to leprosy control, published between 1999 and 2008, this study sought to examine their possible future developments using the tools of scenario analysis. METHODS: The methodological approach adopted was qualitative in nature, grounded in the techniques of literature review and content analysis. The latter was used for categorical, frequency-based and contingency-based classification of the documents, in accordance with the relevant theoretical foundations. RESULTS: Important current elements of an epidemiological and operational nature were retrieved, as well as their respective prospects. CONCLUSIONS: It is projected that the persistence of the disease's incidence rates poses economic and health challenges, ranging from the neoliberal model of world societal organization to the specific competencies of health teams acting in the field.
Resumo:
This work aimed to evaluate the influence of specific operational conditions on the performance of a spiral-wound ultrafiltration pilot plant for direct drinking water treatment, installed at the Guarapiranga reservoir in the São Paulo Metropolitan Region. Results from operational tests showed that the volume of permeate produced when periodic relaxation was combined with flushing and chlorine dosage procedures was 49% higher than the volume obtained when these procedures were not used. Two years of continuous operation demonstrated that the ultrafiltration pilot plant performed better during the fall and winter seasons, with higher permeate flow production and a reduced frequency of chemical cleanings. The observed behavior seems to be associated with the algal bloom events in the reservoir, which are more frequent during the spring and summer seasons, as confirmed by chlorophyll-a analysis results. Concentrate clarification using ferric chloride was quite effective in removing NOM and turbidity, allowing its recirculation to the ultrafiltration feed tank. This procedure made it possible to reach almost 99% water recovery considering a single 54-hour recirculation cycle. Water quality monitoring demonstrated that the ultrafiltration pilot plant was quite efficient: removals of potentially pathogenic organisms (Escherichia coli and total coliforms), turbidity and apparent color were 100%, 95.1% and 91.5%, respectively. (C) 2012 Elsevier B.V. All rights reserved.
Resumo:
Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters: The first, introductory chapter recalls some basic notions of the dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibrations. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis the CWT is used, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, dampings and vibration modes. The application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding the damping. The fourth chapter is still concerned with post-processing the data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined in order to establish which type of model is able to capture more accurately the real dynamic behaviour of the bridge. The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of the 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
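The construction of the FRF through the FFT of recorded signals, described for the second chapter, can be sketched with the common H1 estimator H(f) = Sxy/Sxx averaged over segments. The resonator, sampling rate and signals below are synthetic stand-ins for a measured input-output pair:

```python
import numpy as np

# H1 estimate of an unknown system's FRF from input x and output y.
# The "system" is a synthetic discrete resonator with a pole near 120 Hz;
# in a real forced-vibration test x is the measured force, y the response.
rng = np.random.default_rng(1)
fs, n, seg = 1000.0, 2**16, 2**10
x = rng.standard_normal(n)
r, f0 = 0.98, 120.0                       # pole radius, resonance (Hz)
th = 2 * np.pi * f0 / fs
y = np.zeros(n)
for i in range(2, n):                     # y[i] = 2r cos(th) y[i-1] - r^2 y[i-2] + x[i]
    y[i] = 2 * r * np.cos(th) * y[i - 1] - r * r * y[i - 2] + x[i]

Sxx = np.zeros(seg // 2)
Sxy = np.zeros(seg // 2, dtype=complex)
for k in range(0, n - seg + 1, seg):      # average cross/auto spectra over segments
    X = np.fft.rfft(x[k:k + seg])[:seg // 2]
    Y = np.fft.rfft(y[k:k + seg])[:seg // 2]
    Sxx += (X.conj() * X).real
    Sxy += X.conj() * Y
H = Sxy / Sxx                             # H1 estimator of the FRF
freqs = np.fft.rfftfreq(seg, 1 / fs)[:seg // 2]
print(round(freqs[np.argmax(np.abs(H))], 1))   # peak near 120 Hz
```

The peak of |H| recovers the resonance; in the thesis the modal parameters are then read off the estimated FRF.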
Resumo:
Slender and lighter footbridges are becoming more and more popular to meet the transportation demands and the aesthetic requirements of modern society. The widespread presence of such particular structures has become possible thanks to the availability of new, lightweight materials that are still capable of carrying heavy loads. These kinds of structures are therefore particularly sensitive to vibration serviceability problems, especially those induced by human activities. As a consequence, it has been imperative to study the dynamic behaviour of such slender pedestrian bridges in order to define their modal characteristics. As an alternative to a finite element analysis for finding natural frequencies, damping and mode shapes, the so-called Operational Modal Analysis is a valid tool to obtain these parameters through an ambient vibration test. This work provides a useful insight into the Operational Modal Analysis technique and reports the investigation of the CEME Skywalk, a pedestrian bridge located at the University of British Columbia in Vancouver, Canada. Furthermore, human-induced vibration tests have been performed, and the dynamic characteristics derived from these tests have been compared with those from the ambient vibration tests. The effect of the dynamic properties of the two buildings supporting the CEME Skywalk on the dynamic behaviour of the bridge has also been investigated.
Remission in schizophrenia: validity, frequency, predictors, and patients' perspective 5 years later
Resumo:
In March 2005, the Remission in Schizophrenia Working Group (RSWG) proposed a consensus definition of symptomatic remission in schizophrenia and developed specific operational criteria for its assessment. They pointed out, however, that its validity and its relationship to other outcome dimensions required further examination. This article reviews studies on the validity, frequency, and predictors of symptomatic remission in schizophrenia, as well as studies on patients' perspectives. These studies have demonstrated that the RSWG remission criteria appear achievable and sustainable for a significant proportion of patients and are related to a better overall symptomatic status and functional outcome and, to a less clear extent, to a better quality of life and cognitive performance. However, achieving symptomatic remission does not automatically entail an adequate status in other outcome dimensions. The results of the present review suggest that the RSWG remission criteria are valid and useful. As such, they should be applied consistently in clinical trials. However, the lack of consensus definitions of functional remission and adequate quality of life hampers research on their predictive validity for these outcome dimensions. Future research should therefore seek criteria for these dimensions and test whether the RSWG remission criteria consistently predict a "good" outcome with respect to functioning and quality of life.
Resumo:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for more energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs to conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively.
With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also outline several potential areas for future research in each chapter.
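The price-aware load dispatching idea can be sketched as a greedy allocation that fills the cheapest locations first, subject to capacity. The site names, prices and capacities below are invented, and the dissertation's actual strategy additionally handles carbon limits and transition overheads:

```python
# Minimal sketch of price-aware load dispatching: route request load to
# the data centres with the cheapest electricity first, subject to
# capacity. Site names, prices and capacities are invented.
def dispatch(total_load, sites):
    """sites: list of (name, price_per_kwh, capacity).
    Returns {name: assigned_load}, filling the cheapest sites first."""
    plan = {}
    for name, price, cap in sorted(sites, key=lambda s: s[1]):
        take = min(total_load, cap)
        if take > 0:
            plan[name] = take
            total_load -= take
    if total_load > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

sites = [("dc-east", 0.12, 400), ("dc-west", 0.08, 300), ("dc-eu", 0.15, 500)]
print(dispatch(700, sites))  # {'dc-west': 300, 'dc-east': 400}
```

A full formulation would re-run this periodically as electricity prices change, which is the behaviour the dissertation's strategy models.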
Resumo:
BACKGROUND Kidney recipients maintaining a prolonged allograft survival in the absence of immunosuppressive drugs and without evidence of rejection are supposed to be exceptional. The ERA-EDTA-DESCARTES working group together with Nantes University launched a European-wide survey to identify new patients, describe them and estimate their frequency for the first time. METHODS Seventeen coordinators distributed a questionnaire in 256 transplant centres and 28 countries in order to report as many 'operationally tolerant' patients (TOL; defined as having a serum creatinine <1.7 mg/dL and proteinuria <1 g/day or g/g creatinine despite at least 1 year without any immunosuppressive drug) and 'almost tolerant' patients (minimally immunosuppressed patients (MIS) receiving low-dose steroids) as possible. We reported their number and the total number of kidney transplants performed at each centre to calculate their frequency. RESULTS One hundred and forty-seven questionnaires were returned and we identified 66 TOL (61 with complete data) and 34 MIS patients. Of the 61 TOL patients, 26 were previously described by the Nantes group and 35 new patients are presented here. Most of them were noncompliant patients. At data collection, 31/35 patients were alive and 22/31 still TOL. For the remaining 9/31, 2 were restarted on immunosuppressive drugs and 7 had rising creatinine of whom 3 resumed dialysis. Considering all patients, 10-year death-censored graft survival post-immunosuppression weaning reached 85% in TOL patients and 100% in MIS patients. With 218 913 kidney recipients surveyed, cumulative incidences of operational tolerance and almost tolerance were estimated at 3 and 1.5 per 10 000 kidney recipients, respectively. CONCLUSIONS In kidney transplantation, operational tolerance and almost tolerance are infrequent findings associated with excellent long-term death-censored graft survival.
Resumo:
In pressurized irrigation-water distribution networks, pressure regulating devices are needed to control the flow rate discharged by the irrigation units, owing to the variability of the flow rate. In addition, the applied water volume is usually controlled by operating the valve during a calculated time interval, assuming a constant flow rate. In general, a pressure regulating valve (PRV) is the pressure regulating device commonly used in a hydrant, and it also performs the open-and-close function. A hydrant feeds several irrigation units, requiring a wide range of flow rates. In addition, some flow meters are also available: one as a component of the hydrant, while the rest are placed downstream. Every landowner has one flow meter for each group of field plots downstream of the hydrant. Its reading could be used to refine the water balance, but its accuracy must be taken into account. An ideal PRV would maintain a constant downstream pressure. However, the true performance depends on both the upstream pressure and the discharged flow rate. The objective of this work is to assess the influence of PRV performance on the volume applied over all the irrigation events in a year. The results of the study have been obtained by introducing the flow rate into a PRV model. Variations in flow rate are simulated by taking into account the consequences of variations in climate conditions, as well as decisions in irrigation operation such as the duration and frequency of application. The model comprises the continuity, dynamic and energy equations of the components of the PRV.
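The effect of non-ideal pressure regulation on the applied volume can be illustrated with a toy model (not the paper's PRV model): assume emitter discharge Q = C·√P and a linear droop of the regulated downstream pressure with flow; all numbers are invented:

```python
import math

# Toy illustration (not the paper's model): an emitter discharges
# Q = C * sqrt(P). A non-ideal PRV lets the downstream pressure droop
# with flow, P = P_set - k*Q, so the delivered flow differs from the
# constant flow assumed when timing the irrigation event.
C, P_set, k = 2.0, 30.0, 1.5          # emitter coeff, set pressure (m), droop
Q = C * math.sqrt(P_set)              # initial guess: ideal regulation
for _ in range(50):                   # fixed point of Q = C*sqrt(P_set - k*Q)
    Q = C * math.sqrt(P_set - k * Q)

t_hours = 2.0
assumed_volume = C * math.sqrt(P_set) * t_hours   # constant-flow assumption
actual_volume = Q * t_hours
err = round(100 * (actual_volume / assumed_volume - 1), 1)
print(err)   # percent volume error; about -24% with these invented numbers
```

The sign and size of the error depend on the droop characteristic; the paper quantifies this over a full year of irrigation events with the complete PRV model.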
Resumo:
The modal analysis of a structural system consists of computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, the system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether they are ambient vibrations (wind, earthquakes, ...) or operational loads (traffic, human loading, ...). This procedure is usually called Operational Modal Analysis (OMA) and, in general, consists of fitting a mathematical model to the measured data, assuming the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, and it is analysed how to apply it to vibration data. After that, it is compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA.
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Lastly, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure corresponding to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input of that test. The problem is solved using the proposed state space model and the EM algorithm.
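Once a discrete-time state space model has been estimated (by EM or subspace identification), the modal parameters follow from the eigenvalues of the state matrix. A minimal sketch, built from a known single-degree-of-freedom system so the answer can be checked:

```python
import numpy as np

# For a discrete-time model x[k+1] = A x[k] + w[k], each eigenvalue
# lambda of A maps to a continuous pole mu = ln(lambda)/dt, giving
# natural frequency f = |mu|/(2*pi) and damping ratio zeta = -Re(mu)/|mu|.
dt, f_n, zeta = 0.01, 2.0, 0.05
wn = 2 * np.pi * f_n
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])   # continuous-time SDOF

# Discretise via a truncated matrix-exponential series (fine for this sketch)
A = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (Ac * dt) / k
    A += term

lam = np.linalg.eigvals(A)[0]
mu = np.log(lam) / dt
f_est = abs(mu) / (2 * np.pi)
zeta_est = -mu.real / abs(mu)
print(round(f_est, 3), round(zeta_est, 3))   # 2.0 0.05
```

In OMA the same eigenvalue mapping is applied to the A matrix estimated from output-only data, which is exactly where the quality of the EM estimate matters.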
Resumo:
Operational Modal Analysis consists of estimating the modal parameters of a structure (natural frequencies, damping ratios and modal vectors) from output-only vibration measurements. The modal vectors can only be estimated where a sensor is placed, so when the number of available sensors is lower than the number of tested points, it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors): some sensors stay at the same position from setup to setup, while the others change position until all the tested points are covered. The permanent sensors are then used to merge the mode shapes estimated in each setup (or partial modal vectors) into global modal vectors. Traditionally, the partial modal vectors are estimated independently, setup by setup, and the global modal vectors are obtained in a post-processing phase. In this work we present two state space models that can be used to process all the recorded setups at the same time, and we also show how these models can be estimated using the maximum likelihood method. The result is that the global mode shape of each mode is obtained automatically and, subsequently, a single value for the natural frequency and damping ratio of each mode is computed. Finally, both models are compared using real measured data.
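The traditional setup-by-setup merge that the proposed joint models replace can be sketched as follows: each setup's partial mode shape is scaled to the first setup by a least-squares factor computed at the shared reference sensors, then the roving channels are concatenated. The data below are invented:

```python
import numpy as np

# Classical post-process: scale each setup's partial mode shape to the
# first setup using the reference sensors the setups share, then append
# the roving channels to build the global mode shape.
def merge(setups, ref_idx):
    """setups: list of partial mode shapes; ref_idx: indices of the
    reference sensors (same order in every setup)."""
    base = setups[0]
    merged = list(base)
    for phi in setups[1:]:
        r0, r = base[ref_idx], phi[ref_idx]
        alpha = (r0 @ r) / (r @ r)        # least-squares scale factor
        roving = [v for i, v in enumerate(alpha * phi) if i not in ref_idx]
        merged.extend(roving)
    return np.array(merged)

ref = np.array([0, 1])                     # first two channels are references
setup1 = np.array([1.0, 0.5, 0.9])         # ref1, ref2, roving point A
setup2 = 2.0 * np.array([1.0, 0.5, 0.3])   # same refs, different gain, point B
print(merge([setup1, setup2], ref))        # merged shape: 1.0, 0.5, 0.9, 0.3
```

The joint state space models in this work avoid this scaling step entirely, since all setups are estimated together.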
Resumo:
Conventional bioimpedance spectrometers measure resistance and reactance over a range of frequencies and, by application of a mathematical model for an equivalent circuit (the Cole model), estimate resistance at zero and infinite frequencies. Fitting of the experimental data to the model is accomplished by iterative, nonlinear curve fitting. An alternative fitting method is described that uses only the magnitude of the measured impedances at four selected frequencies. The two methods showed excellent agreement when compared using data obtained both from measurements of equivalent circuits and of humans. These results suggest that operational equivalence to a technically complex, frequency-scanning, phase-sensitive BIS analyser could be achieved from a simple four-frequency, impedance-only analyser.
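A magnitude-only four-frequency fit of the Cole model can be sketched with a generic nonlinear least-squares solver (a stand-in, not necessarily the fitting method of the paper); all parameter values and test frequencies are invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Cole model: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha).
# Fit |Z| at four frequencies to recover R0 (zero-frequency) and Rinf
# (infinite-frequency) resistances. All numbers are invented.
def cole_mag(p, w):
    R0, Rinf, tau, alpha = p
    return np.abs(Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** alpha))

true_p = np.array([700.0, 450.0, 3.2e-6, 0.7])       # R0, Rinf, tau, alpha
w = 2 * np.pi * np.array([5e3, 5e4, 2e5, 1e6])       # four test frequencies
meas = cole_mag(true_p, w)                           # simulated |Z| readings

fit = least_squares(lambda p: cole_mag(p, w) - meas,
                    x0=[680.0, 460.0, 3.0e-6, 0.72], x_scale='jac')
print(np.round(fit.x[:2], 1))   # R0 and Rinf recovered (about 700 and 450)
```

Four magnitude readings suffice here because the model has four parameters, which is the premise of the simplified four-frequency analyser described above.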
Resumo:
The warehouse is an essential component in the supply chain, linking the chain partners and providing them with product storage, inbound and outbound operations, and value-added processes. Allocation of warehouse resources should be efficient and effective to achieve optimum productivity and reduce operational costs. Radio frequency identification (RFID) is a technology capable of providing real-time information about supply chain operations. It has been used by warehousing and logistics enterprises to achieve reduced shrinkage, improved material handling and tracking, and increased accuracy of data collection. However, both academics and practitioners express concerns about the challenges to RFID adoption in the supply chain. This paper provides a comprehensive analysis of the problems encountered in RFID implementation at warehouses, discussing the theoretical and practical adoption barriers and the causes of not achieving the full potential of the technology. Lack of a foreseeable return on investment (ROI) and high costs are the most commonly reported obstacles. The variety of standards and radio wave frequencies is identified as a source of concern for decision makers. The inaccurate performance of RFID within the warehouse environment is examined, and a description of the integration challenges between warehouse management systems and RFID technology is given. The paper discusses the existing solutions to the technological, investment and performance barriers to RFID adoption. Factors to consider when implementing RFID technology are given to help alleviate implementation problems. By illustrating the challenges of RFID in the warehouse environment and discussing possible solutions, the paper aims to help both academics and practitioners focus on the key areas constituting an obstacle to the technology's growth. As more studies address these challenges, the realisation of RFID benefits for warehouses and the supply chain will become a reality.