939 results for monitoring process mean and variance


Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Recent studies have shown that the X̄ chart with variable parameters (Vp X̄ chart) detects process shifts faster than the traditional X̄ chart. This article extends these studies to processes that are monitored jointly by X̄ and R charts. In essence, the current X̄ and R values determine whether control should be relaxed or tightened. When both the X̄ and R values fall in the central region, control is relaxed: the next sample is taken after a longer interval and/or is smaller than usual. When either the X̄ or R value falls in the warning region, control is tightened: the next sample is taken after a shorter interval and is larger than usual. The action limits are also made variable; this paper proposes drawing the action limits of both charts wider than usual when control is relaxed and narrower than usual when it is tightened. The Vp feature improves the performance of the joint X̄ and R control charts in terms of the speed with which shifts in the process mean and/or variance are detected. © 1998 IIE.
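The adaptive decision logic described above can be sketched as follows. This is a minimal illustration only: the region boundaries, sample sizes and sampling intervals are invented assumptions, not the optimized values from the article.

```python
import random
import statistics

# Two parameter sets for the joint Vp X-bar/R scheme: relaxed control uses a
# small sample, a long sampling interval and wide limits; tightened control
# uses a large sample, a short interval and narrow limits. All values assumed.
RELAXED = dict(n=3, interval_h=2.0, warn=1.0, action=3.5, r_warn=3.0, r_action=5.0)
TIGHT   = dict(n=7, interval_h=0.5, warn=0.8, action=2.5, r_warn=2.5, r_action=4.0)

def classify(sample, mu0=0.0, sigma0=1.0, p=RELAXED):
    """Return 'signal', 'tighten' or 'relax' for the current sample."""
    xbar = statistics.fmean(sample)
    r = max(sample) - min(sample)
    z = abs(xbar - mu0) / (sigma0 / len(sample) ** 0.5)  # standardized X-bar
    if z > p["action"] or r > p["r_action"]:
        return "signal"          # out-of-control alarm on either chart
    if z > p["warn"] or r > p["r_warn"]:
        return "tighten"         # warning region: shorten interval, enlarge sample
    return "relax"               # central region: lengthen interval, shrink sample

# The next sample's size, interval and limit widths follow from the decision:
sample = [random.gauss(0.0, 1.0) for _ in range(RELAXED["n"])]
params_for_next_sample = TIGHT if classify(sample) == "tighten" else RELAXED
```

In a full implementation, the chosen parameter set would also rescale the plotted warning and action limits before the next sample is drawn.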

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: Voluntary muscle activity, including swallowing, decreases during the night. The association between nocturnal awakenings and swallowing activity is under-researched, with limited information on the frequency of swallows during awake and asleep periods. AIM: The aim of this study was to assess nocturnal swallowing activity and identify a cut-off predicting awake and asleep periods. METHODS: Patients undergoing impedance-pH monitoring as part of GERD work-up were asked to wear a wrist activity-detecting device (Actigraph®) at night. Swallowing activity was quantified by analysing impedance changes in the proximal esophagus. Awake and asleep periods were determined using a validated scoring system (Sadeh algorithm). Receiver operating characteristic (ROC) analyses were performed to determine the sensitivity, specificity and accuracy of swallowing frequency in identifying awake and asleep periods. RESULTS: Data from 76 patients (28 male, 48 female; mean age 56 ± 15 years) were included in the analysis. The ROC analysis found that 0.33 sw/min (i.e. one swallow every 3 min) had the optimal sensitivity (78%) and specificity (76%) to differentiate awake from asleep periods. A swallowing frequency of 0.25 sw/min (i.e. one swallow every 4 min) was 93% sensitive and 57% specific in identifying awake periods. A swallowing frequency of 1 sw/min was 20% sensitive but 96% specific in identifying awake periods. CONCLUSIONS: Impedance-pH monitoring detects differences in swallowing activity during awake and asleep periods. The swallowing frequency observed during ambulatory impedance-pH monitoring can predict the state of consciousness during nocturnal periods.
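The sensitivity/specificity figures above arise from comparing a swallowing-rate cutoff against the actigraphy-derived state. The sketch below shows the computation; the data in it are invented for illustration and are not the study's data.

```python
# A period is predicted "awake" when its swallowing rate exceeds the cutoff;
# the prediction is compared with the (actigraphy-derived) true state.
def sens_spec(rates, awake, cutoff):
    """rates: swallows/min per period; awake: matching booleans (True = awake)."""
    tp = sum(1 for r, a in zip(rates, awake) if r > cutoff and a)
    fn = sum(1 for r, a in zip(rates, awake) if r <= cutoff and a)
    tn = sum(1 for r, a in zip(rates, awake) if r <= cutoff and not a)
    fp = sum(1 for r, a in zip(rates, awake) if r > cutoff and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Scanning cutoffs over such data and plotting sensitivity against
# 1 - specificity traces out the ROC curve used in the study.
rates = [0.50, 0.40, 0.30, 0.10, 0.20, 0.60]
awake = [True, True, False, False, False, True]
sensitivity, specificity = sens_spec(rates, awake, cutoff=0.33)
```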

Relevance:

100.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits.
The sharp increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature degrades several circuit parameters, such as speed, cooling budgets, reliability, and power consumption. To counter these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information about the die surface. On-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis.

This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and then left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output.

The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 µm², and low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, with a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process variations that accompany recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and two-point calibration. Implementing the sensing part of this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. The time-to-digital conversion reuses the digitization structure of the first sensor.

A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we obtain a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To support this claim, a thorough comparison with over 40 works from the scientific literature is performed. Moving up to the system level, the third contribution centers on modeling a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, we introduce quality metrics beyond the number of sensors: power consumption, sampling frequency, interconnection costs, and the possibility of choosing among different types of monitors. The model feeds a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimal sampling rate. The algorithm is validated on the Alpha 21364 processor under several constraint configurations. Compared with previous works in the literature, the model presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on connecting them efficiently in terms of area and power.

Our first proposal in this area is a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses; it applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity on the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from maximum to minimum. If the scheme is applied to monitors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, producing important savings in area and power. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
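The sensing principle of the first sensor can be sketched behaviorally: leakage grows roughly exponentially with temperature, so the discharge time of the floating node shrinks exponentially, and a base-2 logarithm of the pulse width yields a roughly linear digital output. The constants below are arbitrary illustration values, not the thesis's circuit parameters.

```python
import math

W0 = 1.0e-3      # pulse width at the reference temperature, seconds (assumed)
T_REF = 25.0     # reference temperature, Celsius
K = 0.04         # exponential temperature coefficient, 1/Celsius (assumed)
T_CLK = 1.0e-7   # counter clock period, seconds (assumed)

def pulse_width(temp_c):
    """Discharge time of the floating node: exponential in temperature."""
    return W0 * math.exp(-K * (temp_c - T_REF))

def log_counter(width):
    """Logarithmic counter: digital word ~ log2 of the number of clock ticks,
    performing time-to-digital conversion and linearization in one step."""
    ticks = int(width / T_CLK)
    return ticks.bit_length()  # coarse base-2 logarithm of the tick count

# Sweeping temperature shows the digital output falling roughly linearly.
readings = [(t, log_counter(pulse_width(t))) for t in range(0, 101, 25)]
```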

Relevance:

100.00%

Publisher:

Abstract:

This accessible, practice-oriented and compact text provides a hands-on introduction to the principles of market research. Using the market research process as a framework, the authors explain how to collect and describe the necessary data and present the most important and frequently used quantitative analysis techniques, such as ANOVA, regression analysis, factor analysis, and cluster analysis. An explanation is provided of the theoretical choices a market researcher has to make with regard to each technique, as well as how these are translated into actions in IBM SPSS Statistics. This includes a discussion of what the outputs mean and how they should be interpreted from a market research perspective. Each chapter concludes with a case study that illustrates the process based on real-world data. A comprehensive web appendix includes additional analysis techniques, datasets, video files and case studies. Several mobile tags in the text allow readers to quickly browse related web content using a mobile device.
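Of the techniques the book covers, cluster analysis lends itself to a compact stand-alone illustration. The book itself works in IBM SPSS Statistics; the minimal k-means sketch below (invented data, stdlib only) only conveys the underlying idea of grouping observations by similarity.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: (p[0] - centers[j][0]) ** 2
                                      + (p[1] - centers[j][1]) ** 2)
            groups[nearest].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

# Two well-separated groups of (made-up) respondent coordinates:
pts = [(0.1, 0.2), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, groups = kmeans(pts, 2)
```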

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 60J80, Secondary 62F12, 60G99.

Relevance:

100.00%

Publisher:

Abstract:

About 50 locations ('cold spots') where in situ permafrost monitoring (Arctic and Antarctic) has been taking place for many years, or where field stations are currently established (through, for example, the Canadian ADAPT program), have been identified. These sites have been proposed to the WMO Polar Space Task Group as focus areas for future monitoring by satellite data. In addition, seven monitoring transects spanning different permafrost types have been proposed.

Relevance:

100.00%

Publisher:

Abstract:

Recent developments in automation, robotics and artificial intelligence have driven wider adoption of these technologies, and driverless transport systems are nowadays state of the art on certain legs of transportation. This has encouraged the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium whose objective is to develop readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data have been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques.

As the model met the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs in this way, it was argued that the activity-based LCC model is able to facilitate learning from and continuous improvement of the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still following the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
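The combination of activity-based costing and Monte Carlo simulation described above can be sketched as follows. The activities, cost distributions, vessel lifetime and run count are invented for illustration; they are not figures from the AAWA initiative or its model.

```python
import random
import statistics

# Annual cost per activity as (mean, standard deviation) in EUR, all assumed.
ACTIVITIES = {
    "remote monitoring": (120_000, 15_000),
    "maintenance":       (200_000, 40_000),
    "port calls":        (80_000, 10_000),
}

def simulate_lcc(years=25, runs=2_000, seed=1):
    """Monte Carlo life cycle cost: each run draws every activity's cost for
    every year of the vessel's life and sums them; the spread across runs
    gives the forecast uncertainty."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        total = 0.0
        for _year in range(years):
            for mean, sd in ACTIVITIES.values():
                total += rng.gauss(mean, sd)
        totals.append(total)
    return statistics.fmean(totals), statistics.stdev(totals)

mean_cost, sd_cost = simulate_lcc()
```

Because costs are accumulated per activity, the same structure also supports tracing which activities dominate the total, i.e. the critical-success-factor analysis mentioned above.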

Relevance:

100.00%

Publisher:

Abstract:

The study objective was to evaluate the feasibility of interviews by cell phone as a complement to interviews by landline to estimate risk and protection factors for chronic non-communicable diseases. Adult cell phone users were evaluated by random digit dialing. Questions asked were: age, sex, education, race, marital status, ownership of landline and cell phones, health condition, weight and height, medical diagnosis of hypertension and diabetes, physical activity, diet, binge drinking and smoking. The estimates were calculated using post-stratification weights. The cell phone interview system showed a reduced capacity to reach elderly and low educated populations. The estimates of the risk and protection factors for chronic non-communicable diseases in cell phone interviews were equal to the estimates obtained by landline phone. Eligibility, success and refusal rates using the cell phone system were lower than those of the landline system, but loss and cost were much higher, suggesting it is unsatisfactory as a complementary method in such a context.
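The post-stratification weighting mentioned above reweights sample estimates so that the distribution of a stratifying variable matches known population shares. The sketch below uses age group as the stratifier; the shares and sample values are invented for illustration.

```python
# Census population shares per age group (assumed for illustration).
POP_SHARE = {"18-39": 0.45, "40-59": 0.35, "60+": 0.20}

def poststratified_mean(records):
    """records: list of (age_group, value), e.g. value = 1 if the respondent
    reports a risk factor. Each stratum's sample mean is weighted by its
    population share instead of its (possibly biased) sample share."""
    by_group = {}
    for group, value in records:
        by_group.setdefault(group, []).append(value)
    return sum(POP_SHARE[g] * (sum(v) / len(v)) for g, v in by_group.items())

# A toy sample that over-represents younger respondents:
sample = [("18-39", 1.0), ("18-39", 0.0), ("40-59", 1.0), ("60+", 0.0)]
est = poststratified_mean(sample)
```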

Relevance:

100.00%

Publisher:

Abstract:

The aim of the study was to evaluate possible relationships between stress tolerance, training load, banal infections and salivary parameters during 4 weeks of regular training in fifteen basketball players. The Daily Analysis of Life Demands for Athletes' questionnaire (sources and symptoms of stress) and the Wisconsin Upper Respiratory Symptom Survey were used on a weekly basis. Salivary cortisol and salivary immunoglobulin A (SIgA) were collected before and after the study and measured by enzyme-linked immunosorbent assay (ELISA). Ratings of perceived exertion (training load) were also obtained. The results from ANOVA with repeated measures showed greater training loads, more upper respiratory tract infection episodes and more negative sensations for both symptoms and sources of stress at week 2 (p < 0.05). Significant increases in cortisol levels and decreases in SIgA secretion rate were noted from before to after the study. Negative sensations for symptoms of stress at week 4 were inversely and significantly correlated with SIgA secretion rate. A positive and significant relationship between sources and symptoms of stress at week 4 and cortisol levels was verified. In summary, an approach combining psychometric tools and salivary biomarkers could be an efficient means of monitoring reactions to stress in sport. Copyright (C) 2010 John Wiley & Sons, Ltd.

Relevance:

100.00%

Publisher:

Abstract:

Asymmetric discrete triangular distributions are introduced to extend the symmetric ones, which serve as discrete associated kernels in the nonparametric estimation of discrete functions. The extension from one to two orders around the mode provides a large family of discrete distributions with finite support. Establishing a bridge between the Dirac and discrete uniform distributions, several different shapes are obtained and their properties are investigated; in particular, expressions for the mean and variance are given. Applications to discrete kernel estimators are presented, with a solution to a boundary bias problem. (C) 2010 Elsevier B.V. All rights reserved.
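The role of such distributions as discrete associated kernels can be sketched with the symmetric case, whose weights are proportional to (h + 1 - |x - m|) on the support {m-h, ..., m+h}. The estimator below averages kernels over the sample; the data are invented, and this sketch does not cover the asymmetric extension the paper introduces.

```python
def triangular_kernel(x, m, h):
    """Symmetric discrete triangular kernel centered at mode m with arm h:
    P(x) proportional to (h + 1 - |x - m|) on {m-h, ..., m+h}."""
    if abs(x - m) > h:
        return 0.0
    normalizer = sum(h + 1 - abs(u - m) for u in range(m - h, m + h + 1))
    return (h + 1 - abs(x - m)) / normalizer

def estimate_pmf(data, x, h=1):
    """Nonparametric estimate of P(X = x): average of kernels centered at
    each observation, in the spirit of discrete associated kernel estimation."""
    return sum(triangular_kernel(x, obs, h) for obs in data) / len(data)

data = [0, 1, 1, 2, 2, 2, 3]   # made-up count data
p_at_2 = estimate_pmf(data, 2)
```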

Relevance:

100.00%

Publisher:

Abstract:

The sustainability of fast-growing tropical Eucalyptus plantations is of concern in a context of rising fertilizer costs, since large amounts of nutrients are removed with the biomass every 6-7 years from highly weathered soils. A better understanding of the dynamics of tree requirements is needed to match fertilization regimes to the availability of each nutrient in the soil. The nutrition of Eucalyptus plantations has been intensively investigated, and many studies have focused on specific fluxes in the biogeochemical cycles of nutrients. However, studies dealing with complete cycles are scarce for the Tropics. The objective of this paper was to compare these cycles for Eucalyptus plantations in Congo and Brazil, with contrasting climates, soil properties, and management practices. The main features were similar in the two situations. Most nutrient fluxes were driven by crown establishment in the first two years after planting and by total biomass production thereafter. These forests were characterized by huge nutrient requirements: 155, 10, 52, 55 and 23 kg ha⁻¹ of N, P, K, Ca and Mg, respectively, in the first year after planting at the Brazilian study site. High growth rates in the first months after planting were essential to take advantage of the large amounts of nutrients released into the soil solution by organic matter mineralization after harvesting. This study highlighted the predominant role of the biological and biochemical cycles over the geochemical cycle of nutrients in tropical Eucalyptus plantations and indicated the prime importance of carefully managing organic matter in these soils. Limited nutrient losses through deep drainage after clear-cutting in the sandy soils of the two study sites showed the remarkable efficiency of Eucalyptus trees in keeping limited nutrient pools within the ecosystem, even after major disturbances.
Nutrient input-output budgets suggested that Eucalyptus plantations take advantage of soil fertility inherited from previous land uses and that long-term sustainability will require an increase in the inputs of certain nutrients. (C) 2009 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Hydrochemical processes involved in the development of hydromorphic Podzols are a major concern for the upper Amazon Basin because of the extent of the areas affected by such processes and the large amounts of organic carbon and associated metals exported to the rivers. The dynamics and chemical composition of ground and surface waters were studied along an Acrisol-Podzol sequence lying in an open depression of a plateau. Water levels were monitored along the sequence over a period of 2 years by means of piezometers. Groundwater was sampled in zero-tension lysimeters, and surface water in the drainage network of the depression. The pH and concentrations of organic carbon and major elements (Si, Fe and Al) were determined. The contrasting changes reported for concentrations of Si, organic carbon and metals (Fe, Al) mainly reflect the dynamics of the groundwater and the weathering conditions that prevail in the soils. Iron is released by the reductive dissolution of Fe oxides, mostly in the Bg horizons of the upslope Acrisols. It moves laterally under the control of hydraulic gradients and migrates through the iron-depleted Podzols, from which it is exported to the river network. Aluminium is released by the dissolution of Al-bearing minerals (gibbsite and kaolinite) at the margin of the podzolic area but is immobilized as organo-Al complexes in spodic horizons. In downslope positions, the quick recharge of the groundwater and the large release of organic compounds lead to acidification and a loss of metals (mainly Al) previously stored in the Podzols.

Relevance:

100.00%

Publisher:

Abstract:

Over the last decade, ambitious claims have been made in the management literature about the contribution of emotional intelligence to success and performance. Writers in this genre have predicted that individuals with high emotional intelligence perform better in all aspects of management. This paper outlines the development of a new emotional intelligence measure, the Workgroup Emotional Intelligence Profile, Version 3 (WEIP-3), which was designed specifically to profile the emotional intelligence of individuals in work teams. We applied the scale in a study of the link between emotional intelligence and two measures of team performance: team process effectiveness and team goal focus. The results suggest that the average level of emotional intelligence of team members, as measured by the WEIP-3, is reflected in the initial performance of teams. In our study, low emotional intelligence teams initially performed at a lower level than the high emotional intelligence teams. Over time, however, teams with low average emotional intelligence raised their performance to match that of teams with high emotional intelligence.

Relevance:

100.00%

Publisher: