881 results for DDM Data Distribution Management testbed benchmark design implementation instance generator


Relevance:

50.00%

Publisher:

Abstract:

The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting data about occupational injuries. State-of-the-art systems analysis and design methodologies were applied to the long-standing problem in the field of occupational safety and health of processing workplace injury data into information for safety and health program management, as well as for preliminary research about accident etiologies. A top-down planning and bottom-up implementation approach was used to design an occupational injury management information system. A description of a managerial control system and a comprehensive system to integrate safety and health program management was provided. The project showed that current management information systems (MIS) theory and methods could be applied successfully to the problems of employee injury surveillance and control program performance evaluation. The model developed in the first section was applied at The University of Texas Health Science Center at Houston (UTHSCH). The system in current use at the UTHSCH was described and evaluated, and a prototype was developed for the UTHSCH. The prototype incorporated procedures for collecting, storing, and retrieving records of injuries, together with the procedures necessary to prepare reports, analyses, and graphics for management in the Health Science Center. Examples of reports, analyses, and graphics presenting UTHSCH and computer-generated data were included. It was concluded that a pilot test of this MIS should be implemented and evaluated at the UTHSCH and in other settings. Further research and development efforts on the total safety and health management information system, control systems, component systems, and variable selection should be pursued. Finally, integration of the safety and health program MIS into the comprehensive or executive MIS was recommended.
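As an illustration only, the record-keeping core such a prototype describes (collect, store, retrieve, and report on injury records) can be sketched in a few lines; the table layout, field names, and sample rows below are invented for the example, not taken from the UTHSCH system.

```python
# Illustrative-only sketch of an injury record store with a management
# summary report; schema and data are hypothetical, not the UTHSCH system.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE injury (
    id INTEGER PRIMARY KEY,
    department TEXT,
    injury_type TEXT,
    lost_days INTEGER,
    occurred_on TEXT)""")
con.executemany(
    "INSERT INTO injury (department, injury_type, lost_days, occurred_on) "
    "VALUES (?, ?, ?, ?)",
    [("Lab Services", "needlestick", 0, "1984-03-02"),
     ("Facilities", "sprain", 4, "1984-03-10"),
     ("Lab Services", "laceration", 1, "1984-04-01")])

# Management report: incident counts and lost days by department.
for row in con.execute("""SELECT department, COUNT(*), SUM(lost_days)
                          FROM injury GROUP BY department"""):
    print(row)
```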

Relevance:

50.00%

Publisher:

Abstract:

A census of 925 U.S. colleges and universities offering master's and doctoral degrees was conducted in order to study the number of elements of an environmental management system, as defined by ISO 14001, possessed by small, medium, and large institutions. A 30% response rate was achieved, with 273 responses included in the final data analysis. Overall, the number of ISO 14001 elements implemented among the 273 institutions ranged from 0 to 16, with a median of 12. There was no significant association between the number of elements implemented and the size of the institution (p = 0.18; Kruskal-Wallis test) or the USEPA region (p = 0.12; Kruskal-Wallis test). The proportion of U.S. colleges and universities that had implemented a structured, comprehensive environmental management system, defined as answering yes to all 16 elements, was 10% (95% C.I. 6.6%–14.1%); however, 38% (95% C.I. 32.0%–43.8%) reported that they had implemented such a system, while 30.0% (95% C.I. 24.7%–35.9%) were planning to implement a comprehensive environmental management system within the next five years. Stratified analyses were performed by institution size, Carnegie Classification, and job title. The Osnabruck model, and another under development by the South Carolina Sustainable Universities Initiative, are the only two environmental management system models that have been proposed specifically for colleges and universities, although several guides are now available. The Environmental Management System Implementation Model for U.S. Colleges and Universities developed here is an adaptation of the ISO 14001 standard and USEPA recommendations, tailored to U.S. colleges and universities to streamline the implementation process. By using this implementation model created for the U.S. research and academic setting, it is hoped that these highly specialized institutions will be given a clearer and more cost-effective path towards the implementation of an EMS and greater compliance with local, state, and federal environmental legislation.
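A minimal sketch of the two statistics reported above, assuming placeholder data: a Kruskal-Wallis test of element counts across institution-size groups, and a Wilson 95% confidence interval for a reported proportion. The group sizes and counts are invented; only the methods mirror the abstract.

```python
# Kruskal-Wallis test across groups plus a Wilson CI for a proportion,
# using invented placeholder data in place of the survey responses.
import numpy as np
from scipy.stats import kruskal, norm

rng = np.random.default_rng(0)
# Placeholder EMS element counts (0-16) for small/medium/large institutions.
small = rng.integers(0, 17, 90)
medium = rng.integers(0, 17, 90)
large = rng.integers(0, 17, 93)

h_stat, p_value = kruskal(small, medium, large)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.3f}")

def wilson_ci(successes, n, alpha=0.05):
    """Wilson 95% confidence interval for a binomial proportion."""
    z = norm.ppf(1 - alpha / 2)
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g., roughly the "10% of 273 answered yes to all 16 elements" figure.
print("95% CI:", wilson_ci(28, 273))
```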

Relevance:

50.00%

Publisher:

Abstract:

Objective. This research study had two goals: (1) to describe resource consumption patterns for Medi-Cal children with cystic fibrosis, and (2) to explore the feasibility, from a rate design perspective, of developing specialized managed care plans for such a special needs population. Background. Children with special health care needs (CSHN) comprise about 2% of the California Medicaid pediatric population. CSHN have rare but serious health problems, such as cystic fibrosis. Medicaid programs, including Medi-Cal, are enrolling more and more beneficiaries in managed care to control costs. CSHN, however, do not fit the wellness model underlying most managed care plans. Child health advocates believe that both efficiency and quality will suffer if CSHN are removed from regionalized special care centers and scattered among general purpose plans. They believe that CSHN should be "carved out" from enrollment in general plans. One alternative is the Specialized Managed Care Plan, tailored for CSHN. Methods. The study population consisted of children under age 21 with CF who were eligible for Medi-Cal and the California Children's Services program (CCS) during 1991. Health Care Financing Administration (HCFA) Medicaid Tape-to-Tape data were analyzed as part of a California Children's Hospital Association (CCHA) project. Results. Mean Medi-Cal expenditures per month enrolled were $2,302 for 457 CF children, compared to about $1,270 for all 47,000 CCS special needs children and roughly $60 for almost 2.6 million "regular needs" children. For CF children, inpatient care (80%) and outpatient drugs (9%) were the major cost drivers, with all outpatient visits comprising only 2% of expenditures. About one-third of CF children were eligible due to AFDC (Aid to Families with Dependent Children). Age group explained about 17% of all expenditure variation. Regression analysis was used to select the best capitation rate structure (rate cells by age and eligibility group). Sensitivity analysis estimated moderate financial risk for a statewide plan (360 enrollees), but severe risk for single-county implementation due to small numbers of children. Conclusions. Study results support the carve-out of CSHN due to their unique expenditure patterns. The Specialized Managed Care Plan concept appears feasible from a rate design perspective given sufficient enrollees.
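As a toy illustration of the rate-cell idea above (not the study's data), the following groups per-child monthly expenditures by age group and eligibility category to produce candidate capitation cells.

```python
# Illustrative sketch: building capitation rate cells by age group and
# eligibility category from per-child monthly expenditures. All values
# are invented placeholders, not the study's Tape-to-Tape data.
import pandas as pd

claims = pd.DataFrame({
    "age_group": ["0-5", "0-5", "6-12", "13-20", "13-20"],
    "eligibility": ["AFDC", "other", "AFDC", "other", "AFDC"],
    "monthly_cost": [1800.0, 2100.0, 2250.0, 2900.0, 2600.0],
})

# Mean cost per member per month within each rate cell.
rate_cells = (claims
              .groupby(["age_group", "eligibility"])["monthly_cost"]
              .agg(["mean", "count"]))
print(rate_cells)
```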

Relevance:

50.00%

Publisher:

Abstract:

The purpose of this study was to analyze the implementation of national family planning policy in the United States, which was embedded in four separate statutes during the period of study, Fiscal Years 1976-81. The design of the study utilized a modification of the Sabatier and Mazmanian framework for policy analysis, which defined implementation as the carrying out of statutory policy. The study was divided into two phases. The first phase compared the implementation of family planning policy under each of the pertinent statutes. The second phase identified factors that were associated with implementation of federal family planning policy within the context of block grants. Implementation was measured by federal dollars spent on family planning, adjusted for the size of the respective state target populations. Expenditure data were collected from the Alan Guttmacher Institute and from each of the federal agencies having administrative authority for the four pertinent statutes; data from the former were used for most of the analysis because they were more complete and more reliable. The first phase of the study tested the hypothesis that the coherence of a statute is directly related to effective implementation. Equity in the distribution of funds to the states was used to operationalize effective implementation. To a large extent, the results of the analysis supported the hypothesis. In addition to their theoretical significance, these findings were also significant for policymakers insofar as they demonstrated the effectiveness of categorical legislation in implementing desired health policy. Given the current and historically intermittent emphasis on more state and less federal decision-making in health and human services, the second phase of the study focused on state-level factors associated with expenditures of social service block grant funds for family planning. Using the Sabatier-Mazmanian implementation model as a framework, many factors were tested. Those factors showing the strongest conceptual and statistical relationship to the dependent variable were used to construct a statistical model. Using multivariable regression analysis, this model was applied cross-sectionally to each of the years of the study. The most striking finding was that the dominant determinants of state spending varied for each year of the study (Fiscal Years 1976-1981). The significance of these results is that they provide empirical support for current implementation theory, showing that the dominant determinants of implementation vary greatly over time.
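A hedged sketch of the second-phase approach, assuming synthetic state-level data: the same regression is fitted cross-sectionally for each fiscal year so the dominant determinants can be compared year by year. The predictor names in the comments are hypothetical.

```python
# Cross-sectional OLS fitted separately per fiscal year, on synthetic
# data standing in for the 50-state block grant expenditure dataset.
import numpy as np

def fit_ols(X, y):
    """Return OLS coefficients for y ~ X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
for year in range(1976, 1982):
    X = rng.normal(size=(50, 3))   # e.g., income, need, agency capacity (hypothetical)
    y = rng.normal(size=50)        # family planning spending per capita (synthetic)
    beta = fit_ols(X, y)
    print(year, np.round(beta, 2))  # compare which coefficients dominate each year
```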

Relevance:

50.00%

Publisher:

Abstract:

This cross-sectional study used a qualitative and quantitative research design to review health policy decisions, their practice, and their implications during the 2009 H1N1 influenza pandemic in the United States and globally. The "Future Pandemic Influenza Control (FPIC) related Strategic Management Plan" was developed by incorporating the "National Strategy for Pandemic Influenza (2005)" from the U.S. Homeland Security Council and "The Canadian Pandemic Influenza Plan for the Health Sector (2006)" from the Canadian Pandemic Influenza Committee, for use by public health agencies in the United States as well as globally. A "global influenza experts' survey" was designed and administered via email through the Survey Monkey system to 2009 H1N1 influenza pandemic experts as the study respondents. The effectiveness of the plan was confirmed, and the questionnaire was validated as a convenient instrument whose questions gave respondents an efficient means of evaluating the effectiveness of predefined strategies/interventions for future pandemic influenza control.

The quantitative analysis covered the responses to the Likert-scale questions about predefined strategies/interventions addressing five strategic issues in pandemic influenza control. For the first strategic issue, influenza prevention and pre-pandemic planning, the confirmed effectiveness (agreement) was 87.5% for strategy (1a), 91.7% for (1b), and 83.3% for (1c); the assessed priority order was 1b (high) > 1a (medium) > 1c (low), based on the available resources of developing and developed countries. For the second strategic issue, preparedness and communication, agreement was 95.6% for strategy (2a), 82.6% for (2b), 91.3% for (2c), and 87.0% for (2d); the priority order was 2a (highest) > 2c (high) > 2d (medium) > 2b (low). For the third strategic issue, surveillance and detection, agreement was 90.9% for strategy (3a) and 77.3% for (3b); the priority order was 3a (high) > 3b (medium/low). For the fourth strategic issue, response and containment, agreement was 63.6% for strategy (4a), 81.8% for (4b), 86.3% for (4c), and 86.4% for (4d); the priority order was 4d (highest) > 4c (high) > 4b (medium) > 4a (low). For the fifth strategic issue, recovery and post-pandemic planning, agreement was 68.2% for strategy (5a), 36.3% for (5b), and 40.9% for (5c); the priority order was 5a (high) > 5c (medium) > 5b (low).

The qualitative analysis of the responses to the open-ended questions was performed by means of thematic content analysis. The following recurrent themes were identified for the future implementation of the predefined strategies addressing the five strategic issues of the FPIC-related Strategic Management Plan: (1) pre-pandemic influenza prevention, (2) seasonal influenza control, (3) cost-effectiveness of non-pharmaceutical interventions (NPI), (4) raising global public awareness, (5) global influenza vaccination campaigns, (6) priority for high-risk populations, (7) prompt accessibility and distribution of influenza vaccines and antiviral drugs, (8) the vital role of the private sector, (9) school-based influenza containment, (10) efficient global risk communication, (11) global research collaboration, (12) the critical role of global public health organizations, (13) global syndromic surveillance and surge capacity, and (14) post-pandemic recovery and lessons learned. The future implementation of these strategies with confirmed effectiveness aims primarily to reduce the overall response time across early detection, strategy (intervention) formulation, and implementation, and ultimately to ensure the following health outcomes: (a) reduced influenza transmission, (b) prompt and effective influenza treatment and control, and (c) reduced influenza-related morbidity and mortality.
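A minimal sketch, with invented responses, of the quantitative step described above: collapsing 5-point Likert answers into the percent-agreement figures quoted for each strategy.

```python
# Percent agreement per strategy from 5-point Likert responses.
# The response data here are invented placeholders, not the survey data.
import numpy as np

responses = {             # 1 = strongly disagree ... 5 = strongly agree
    "1a": [5, 4, 4, 5, 3, 4, 5, 4],
    "1b": [5, 5, 4, 4, 5, 4, 5, 5],
    "1c": [4, 3, 4, 5, 4, 2, 5, 4],
}

for strategy, scores in responses.items():
    scores = np.asarray(scores)
    agreement = np.mean(scores >= 4) * 100  # % answering agree or strongly agree
    print(f"strategy {strategy}: {agreement:.1f}% agreement")
```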

Relevance:

50.00%

Publisher:

Abstract:

OBJECTIVE. To determine the effectiveness of active surveillance cultures and associated infection control practices on the incidence of methicillin-resistant Staphylococcus aureus (MRSA) in the acute care setting. DESIGN. A historical analysis of existing clinical data utilizing an interrupted time series design. SETTING AND PARTICIPANTS. Patients admitted to a 260-bed tertiary care facility in Houston, TX from January 2005 through December 2010. INTERVENTION. Infection control practices, including enhanced barrier precautions, compulsive hand hygiene, disinfection and environmental cleaning, and executive ownership and education, were simultaneously introduced during a 5-month intervention implementation period culminating in the implementation of active surveillance screening. Beginning June 2007, all high-risk patients were cultured for MRSA nasal carriage within 48 hours of admission. Segmented Poisson regression was used to test the significance of the difference in incidence of healthcare-associated MRSA during the 29-month pre-intervention period compared to the 43-month post-intervention period. RESULTS. A total of 9,957 of 11,095 high-risk patients (89.7%) were screened for MRSA carriage during the intervention period. Active surveillance cultures identified 1,330 MRSA-positive patients (13.4%), contributing to an admission prevalence of 17.5% in high-risk patients. The mean rate of healthcare-associated MRSA infection and colonization decreased from 1.1 per 1,000 patient-days in the pre-intervention period to 0.36 per 1,000 patient-days in the post-intervention period (P<0.001). The intervention, together with the percentage of S. aureus isolates susceptible to oxacillin, was statistically significantly associated with the incidence of MRSA infection and colonization (IRR = 0.50, 95% CI = 0.31-0.80 and IRR = 0.004, 95% CI = 0.00003-0.40, respectively). CONCLUSIONS. Aggressively targeting patients at high risk of MRSA colonization with active surveillance cultures and associated infection control practices, as part of a multifaceted, hospital-wide intervention, is effective in reducing the incidence of healthcare-associated MRSA.
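A sketch of the segmented Poisson regression named above, on simulated monthly data: a log patient-days offset turns the model into a rate, and a post-intervention indicator captures the level change. All counts and dates here are synthetic.

```python
# Segmented Poisson regression with a level change at the intervention
# month, fitted to synthetic monthly MRSA counts with a patient-days offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(72)                    # Jan 2005 - Dec 2010
post = (months >= 29).astype(float)       # intervention after month 29
patient_days = rng.integers(5000, 7000, 72).astype(float)
rate = np.where(post == 1, 0.36e-3, 1.1e-3)   # roughly the rates above
cases = rng.poisson(rate * patient_days)

X = sm.add_constant(np.column_stack([months, post]))
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(patient_days)).fit()
print(np.exp(model.params))               # IRRs: baseline, trend, level change
```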

Relevance:

50.00%

Publisher:

Abstract:

Reef managers cannot fight global warming through mitigation at the local scale, but they can use information on thermal patterns to plan reserve networks that maximize the probability of persistence of their reef system. Here we assess previous methods for the design of reserves for climate change and present a new approach to prioritizing areas for conservation that leverages the most desirable properties of previous approaches. The new method moves the science of reserve design for climate change a step forward by: (1) recognizing the role of seasonal acclimation in raising the limits of corals' environmental tolerance and ameliorating the bleaching response; (2) including information from several bleaching events, whose frequency is likely to increase in the future; and (3) assessing relevant variability at country scales, where most management plans are carried out. We demonstrate the method in Honduras, where a reassessment of the marine spatial plan is in progress.
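One possible reading of the prioritization idea, with invented numbers: stress each site's bleaching-event temperatures against a local, acclimation-informed baseline and reward sites that repeatedly escaped bleaching. The combination rule below is an assumption for illustration, not the paper's algorithm.

```python
# Toy site-priority score from bleaching history and thermal stress
# relative to a local seasonal baseline; all data are invented.
import numpy as np

# rows = sites, columns = bleaching events; 1 = bleached, 0 = escaped
bleach_history = np.array([[0, 0, 1],
                           [1, 1, 1],
                           [0, 0, 0]])
# local climatological summer maxima (degC), used as acclimation baseline
local_mmm = np.array([28.9, 29.4, 28.6])
event_sst = np.array([[29.5, 29.8, 30.4],
                      [30.6, 30.9, 31.2],
                      [28.8, 29.0, 29.3]])

stress = np.clip(event_sst - local_mmm[:, None], 0, None).mean(axis=1)
escape_rate = 1 - bleach_history.mean(axis=1)
priority = escape_rate / (1 + stress)   # assumed combination rule
print("site priorities:", np.round(priority, 2))
```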

Relevance:

50.00%

Publisher:

Abstract:

Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and meet different power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture: one that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency, and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built; the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and another for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
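A toy genetic algorithm in the spirit of the evolutionary step described above: each genome assigns one catalog converter to each load, and selection trades efficiency against cost. The catalog, weights, and operators are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal genetic algorithm: pick one converter per load to maximize a
# weighted efficiency/cost objective. Catalog values are invented.
import random

CATALOG = [  # (efficiency, cost_eur, area_cm2) per candidate converter
    (0.90, 1.2, 2.0), (0.93, 2.0, 2.5), (0.96, 3.5, 3.0), (0.88, 0.8, 1.5),
]
N_LOADS = 6

def fitness(genome):
    eff = sum(CATALOG[g][0] for g in genome) / N_LOADS
    cost = sum(CATALOG[g][1] for g in genome)
    return eff - 0.05 * cost      # reward efficiency, penalize cost (assumed weights)

def evolve(pop_size=30, generations=40):
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(N_LOADS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best architecture:", best, "fitness:", round(fitness(best), 3))
```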

Relevance:

50.00%

Publisher:

Abstract:

Accurate control over the spent nuclear fuel content is essential for its safe and optimized transportation, storage, and management. Consequently, the reactivity of spent fuel and its isotopic content must be accurately determined. Nowadays, predicting isotopic evolution throughout irradiation and decay periods is not a problem, thanks to the development of powerful codes and methodologies. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, it is desirable to determine how uncertainties in the basic nuclear data affect isotopic prediction calculations by quantifying their associated uncertainties.
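A minimal sketch of the uncertainty-quantification idea, under strong simplifying assumptions: a one-group cross section with an assumed 5% uncertainty is sampled by Monte Carlo and propagated through a single-nuclide exponential depletion model. Real spent-fuel work uses full covariance data and depletion codes.

```python
# Monte Carlo propagation of a nuclear-data uncertainty through a toy
# one-nuclide depletion model; every numeric value is an assumption.
import numpy as np

rng = np.random.default_rng(3)
sigma_a = 2.7e-24        # nominal one-group absorption cross section, cm^2 (invented)
rel_unc = 0.05           # assumed 5% relative uncertainty in the nuclear data
phi = 3e13               # neutron flux, n/cm^2/s (invented)
t = 3 * 365 * 24 * 3600  # 3 years of irradiation, in seconds
n0 = 1.0                 # normalized initial nuclide density

samples = rng.normal(sigma_a, rel_unc * sigma_a, 10_000)
n_final = n0 * np.exp(-samples * phi * t)   # N(t) = N0 * exp(-sigma*phi*t)
print(f"N/N0 = {n_final.mean():.4f} +/- {n_final.std():.4f}")
```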

Relevance:

50.00%

Publisher:

Abstract:

A useful strategy for improving disaster risk management is sharing spatial data across different technical organizations through shared information systems. However, the implementation of this type of system requires a large effort, so it is difficult to find fully implemented and sustainable information systems that facilitate sharing multinational spatial data about disasters, especially in developing countries. In this paper, we describe a pioneering system for sharing spatial information that we developed for the Andean Community. This system, called SIAPAD (Andean Information System for Disaster Prevention and Relief), integrates spatial information from 37 technical organizations in the Andean countries (Bolivia, Colombia, Ecuador, and Peru). SIAPAD was based on the concept of a thematic Spatial Data Infrastructure (SDI) and includes a web application, called GEORiesgo, which helps users find relevant information through a knowledge-based system. In the paper, we describe the design and implementation of SIAPAD, together with general conclusions and future directions drawn from this work.
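Purely as an illustration of the kind of thematic-SDI lookup GEORiesgo provides, the following sketches a keyword search over layer metadata contributed by member organizations; the records and field names are invented.

```python
# Toy metadata catalog search over hazard layers from member organizations.
# Records, layer names, and keywords are hypothetical placeholders.
records = [
    {"org": "INDECI (Peru)", "layer": "flood_hazard_zones",
     "keywords": {"flood", "hazard", "peru"}},
    {"org": "INGEOMINAS (Colombia)", "layer": "volcanic_ash_fall",
     "keywords": {"volcano", "hazard", "colombia"}},
]

def search(query_terms):
    """Return records whose keyword sets intersect the query terms."""
    terms = set(query_terms)
    return [r for r in records if terms & r["keywords"]]

for hit in search(["flood"]):
    print(hit["org"], "->", hit["layer"])
```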

Relevance:

50.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis.

This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very small area, 10,250 nm2, and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of the publication of this thesis, they still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration: it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.

The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependence of leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. For the time-to-digital conversion, we employ the same digitization structure as in the first sensor. A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by ultra-low energy per conversion, 48-640 pJ, and an area of 0.0016 mm2; this figure outperforms all previous works. To support this claim, we perform a thorough comparison with over 40 works from the scientific literature.

Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we also consider the power consumption, the sampling frequency, the interconnection costs, and the possibility of choosing among different types of monitors. The model is introduced into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the modeling presented here is the most complete.

Finally, the last contribution targets the network level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward derivation of a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly surpass previous works in terms of area and, especially, power consumption.
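A compact sketch of the system-level contribution, assuming a synthetic thermal map: simulated annealing moves a fixed set of monitor positions to minimize the error of nearest-monitor temperature reconstruction. The real model also weighs power, sampling rate, monitor type, and interconnect cost, which this toy omits.

```python
# Simulated annealing placement of thermal monitors on a synthetic die
# temperature map; the map, grid size, and schedule are assumptions.
import math
import random

GRID = 16
random.seed(4)
# Synthetic thermal map with one hot spot (placeholder for real floorplan data).
temp = [[60 + 25 * math.exp(-((x - 4) ** 2 + (y - 11) ** 2) / 8.0)
         for y in range(GRID)] for x in range(GRID)]

def error(monitors):
    """Mean error when each cell is estimated by its nearest monitor."""
    total = 0.0
    for x in range(GRID):
        for y in range(GRID):
            nearest = min(monitors, key=lambda m: (m[0] - x) ** 2 + (m[1] - y) ** 2)
            total += abs(temp[x][y] - temp[nearest[0]][nearest[1]])
    return total / GRID ** 2

monitors = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(4)]
cost, t = error(monitors), 5.0
while t > 0.01:
    cand = list(monitors)   # perturb: relocate one random monitor
    cand[random.randrange(len(cand))] = (random.randrange(GRID), random.randrange(GRID))
    c = error(cand)
    if c < cost or random.random() < math.exp((cost - c) / t):
        monitors, cost = cand, c
    t *= 0.95               # geometric cooling schedule
print("monitor positions:", monitors, "mean error:", round(cost, 2))
```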

Relevance:

50.00%

Publisher:

Abstract:

This paper presents the results of research aimed at formulating a general model to support the implementation and management of an urban road pricing scheme. After preliminary work to define the state of the art in sustainable urban mobility strategies, the problem was set up theoretically in terms of transport economics, introducing the concept of external costs, duly translated into the principle of pricing for the use of public infrastructures. The research is based on the definition of a set of direct and indirect indicators that qualify urban areas by land use, mobility, environmental, and economic conditions. These indicators were calculated for a selected set of typical urban areas in Europe on the basis of the results of a survey carried out by means of a specific questionnaire. Once the most typical and interesting applications of the road pricing concept had been identified, in cities such as London (Congestion Charging), Milan (Ecopass), Stockholm (Congestion Tax), and Rome (ZTL), a large benchmarking exercise and the cross-analysis of direct and indirect indicators allowed the definition of a simple general model, guidelines, and key requirements for implementing a pricing-based traffic restriction scheme in a generic urban area. The model was finally applied to the design of a road pricing scheme for a particular area in Madrid and to the quantification of the expected results of its implementation from land use, mobility, environmental, and economic perspectives.
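A hedged sketch of the indicator cross-analysis, with invented values: direct and indirect indicators are min-max normalized and combined with assumed policy weights to rank candidate areas for a pricing scheme.

```python
# Composite-indicator ranking of candidate areas; indicator values and
# weights are invented placeholders, not the paper's survey data.
import numpy as np

areas = ["Area A", "Area B", "Area C"]
# columns: congestion level, transit accessibility, NO2 concentration
raw = np.array([[0.80, 0.70, 48.0],
                [0.55, 0.40, 35.0],
                [0.90, 0.85, 52.0]])

# Min-max normalization so indicators are comparable on [0, 1].
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
weights = np.array([0.4, 0.3, 0.3])   # assumed policy weights
scores = norm @ weights
for area, s in sorted(zip(areas, scores), key=lambda p: -p[1]):
    print(f"{area}: suitability {s:.2f}")
```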

Relevance:

50.00%

Publisher:

Abstract:

This paper presents the design and the results of the implementation of a model for the evaluation and improvement of maintenance management in industrial SMEs. A thorough review of the state of the art on maintenance management was conducted to determine the model variables; to characterize industrial SMEs, a questionnaire was developed with Likert-scale variables collected in the previous step. Once the questionnaire was validated, it was applied to a group of seventy-five (75) SMEs in the industrial sector located in Bolivar State, Venezuela. To identify the most relevant maintenance management variables, the exploratory factor analysis technique was applied to the collected data. The score obtained across all the companies evaluated (57% compliance) highlights the weakness of maintenance management in industrial SMEs, particularly in the areas of planning and continuous improvement; most SMEs were assessed as being at the corrective maintenance stage, with standard practice responding only to the occurrence of faults.
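As a rough sketch of the variable-reduction step, assuming a synthetic response matrix: a principal-component decomposition of Likert questionnaire data, a common first pass toward the exploratory factor analysis the paper applies.

```python
# Principal-component view of a Likert response matrix (75 SMEs x 10 items);
# the data are synthetic placeholders, not the Venezuelan survey.
import numpy as np

rng = np.random.default_rng(5)
X = rng.integers(1, 6, size=(75, 10)).astype(float)

Xc = X - X.mean(axis=0)                 # center items
cov = np.cov(Xc, rowvar=False)          # 10 x 10 item covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()
print("variance explained by first 3 components:",
      np.round(explained[:3], 3))
```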

Relevance:

50.00%

Publisher:

Abstract:

The electrical power distribution and commercialization scenario is evolving worldwide, and electricity companies, faced with the challenge of new information requirements, are demanding IT solutions to deal with the smart monitoring of power networks. Two main challenges arise from the data management and smart monitoring of power networks: real-time data acquisition and big data processing over short time periods. We present a solution in the form of a system architecture that addresses real-time issues and has the capacity for big data management.
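An architectural sketch only, under stated assumptions: the queue-decoupled producer/consumer below mimics real-time acquisition feeding a short-window aggregation. A production system of the kind described would replace the in-process queue with a message broker and a distributed processing layer.

```python
# Toy acquisition/aggregation pipeline: simulated meters push readings
# into a queue; an aggregator computes a short-window statistic.
# All names and values are illustrative, not the paper's architecture.
import queue
import statistics
import threading
import time

readings: "queue.Queue[tuple[str, float]]" = queue.Queue()

def meter_simulator(meter_id: str, n: int) -> None:
    """Producer: emit fake voltage samples for one meter."""
    for i in range(n):
        readings.put((meter_id, 230.0 + 0.1 * i))
        time.sleep(0.001)

def aggregator(expected: int) -> None:
    """Consumer: aggregate one window of samples from all meters."""
    window: list[float] = []
    for _ in range(expected):
        _, value = readings.get()
        window.append(value)
    print(f"window mean={statistics.mean(window):.2f} V over {len(window)} samples")

producers = [threading.Thread(target=meter_simulator, args=(f"m{i}", 50))
             for i in range(3)]
consumer = threading.Thread(target=aggregator, args=(150,))
for p in producers:
    p.start()
consumer.start()
for p in producers:
    p.join()
consumer.join()
```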