908 results for High Reliability


Relevância:

60.00%

Publicador:

Resumo:

In recent years the use of LEDs (Light Emitting Diodes) has increased significantly, and they are now a real alternative to traditional lighting systems. LED-based lighting is widely used in automotive, architectural, domestic and signalling applications owing to its high reliability, small size and low power consumption. Evaluating LED reliability is a key step before marketing a device or deploying it in a new application. Device reliability evaluation requires accelerated tests to obtain reliability results within an acceptable period of time, on the order of a few weeks. This project studies the reliability of two different types of ultraviolet LEDs, which can replace conventional UV lamps, under different operating and environmental conditions. The evolution of the UV LEDs is tracked over hundreds of hours of accelerated testing to obtain results and draw conclusions about the degradation they suffer. The final-year project report is structured in seven chapters: three theoretical, three experimental, and one on the budget. The first introduces the LED and its evolution; the second introduces reliability, explaining the models most commonly used to analyse the tests; and the third is a brief chapter on accelerated testing. The remaining chapters cover the experiments carried out in this project: one describes the accelerated test performed, another analyses the results obtained, the next presents the conclusions, and the last the budget.
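The accelerated tests described above are commonly analysed with temperature-acceleration models such as the Arrhenius model; the abstract does not name the specific model used, so the sketch below, including the 0.7 eV activation energy and the temperature pair, is purely illustrative:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor AF = exp((Ea/k) * (1/T_use - 1/T_stress)):
    how much faster failures accumulate at the stress temperature."""
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# With these illustrative numbers, a few hundred hours at 85 degC stand in
# for tens of thousands of hours at a 25 degC use temperature.
af = acceleration_factor(0.7, 25.0, 85.0)
```

An AF of roughly 100 is why a few weeks of stress testing can substitute for years of field operation.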

Relevância:

60.00%

Publicador:

Resumo:

Many distributed applications require a reliable multicast service, including distributed databases, distributed operating systems, distributed interactive simulation systems, and applications distributing software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to reliable multicast in internetworks has been based mainly on two points: (1) providing many service guarantees in a single protocol (for example reliability, atomicity and ordering), some of them at different levels, without taking into account that many multicast applications that require reliability do not need the other guarantees; and (2) extending solutions adopted in the unicast environment to the multicast environment without considering their distinctive characteristics. Hence the attempts to solve the multicast reliability problem with end-to-end (transport-level) protocols and with error-recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (requested packets are retransmitted to the whole group). In general, these approaches have produced protocols that are inefficient in execution time, have scalability problems, make suboptimal use of network resources and are unsuitable for delay-sensitive applications.
This thesis investigates the reliable multicast problem in internetworks operating in datagram mode and presents a novel way of approaching it: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be provided by a higher-level protocol or by the application itself. Following this new approach, a reliable multicast protocol operating at the network level (called RMNP) has been designed. The most representative characteristics of the RMNP are as follows: (1) it follows a sender-oriented approach, which allows a very high degree of reliability; (2) it uses an error-recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (retransmissions reach only a certain number of members), which makes it possible to optimize the mean distribution delay and to reduce the overhead introduced by retransmissions; and (3) certain routers incorporate aggregation and filtering functions for control packets, which avoid implosion problems and reduce the traffic flowing towards the source. Simulation tests were performed to evaluate the behaviour of the protocol; the main conclusions are that the RMNP scales correctly with group size, makes optimal use of network resources and is suitable for delay-sensitive applications.
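The aggregation-and-filtering idea in point (3) can be illustrated with a toy model; the class and field names below are ours, not part of the RMNP specification. An intermediate router forwards at most one retransmission request per lost packet towards the source, no matter how many downstream members report the loss:

```python
class AggregatingRouter:
    """Toy model of RMNP-style control-packet aggregation (illustrative only)."""

    def __init__(self):
        self.pending = set()    # sequence numbers already reported upstream
        self.forwarded = []     # NACKs actually sent towards the source

    def receive_nack(self, seq_no):
        # Filter duplicates: many members may report the same lost packet,
        # but only the first report travels upstream (avoids implosion).
        if seq_no not in self.pending:
            self.pending.add(seq_no)
            self.forwarded.append(seq_no)

    def packet_recovered(self, seq_no):
        self.pending.discard(seq_no)

router = AggregatingRouter()
for member in range(1000):      # 1000 members all miss packet 42
    router.receive_nack(42)
# A single NACK reaches the source instead of 1000.
```

This is the mechanism that keeps control traffic towards the source independent of group size.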

Relevância:

60.00%

Publicador:

Resumo:

For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

Relevância:

60.00%

Publicador:

Resumo:

Auditory-perceptual assessment plays a fundamental role in the study and evaluation of the voice; however, being subjective, it is prone to imprecision and variation. Acoustic analysis, on the other hand, allows reproducibility of results, but needs improvement, since it does not accurately analyse voices with more intense dysphonia and chaotic waveforms. Developing measures that provide reliable knowledge about vocal function thus answers a long-standing need in this line of research and clinical practice. In this context, the use of artificial intelligence, such as artificial neural networks, appears to be a promising approach. Objective: To validate an automatic system using artificial neural networks for the assessment of rough and breathy voices. Materials and methods: 150 voices, ranging from neutral to intensely rough and/or breathy, were selected from the database of the Speech-Language Pathology Clinic of the Bauru School of Dentistry (FOB/USP). Of these, 23 were excluded for not meeting the inclusion criteria, leaving 123 voices. Procedures: auditory-perceptual assessment using a 100 mm visual analogue scale and a four-point numerical scale; extraction of voice-signal features by means of the Wavelet Packet Transform and the acoustic parameters jitter, shimmer, derivative amplitude and pitch amplitude; and validation of the classifier through parameterization, training, testing and evaluation of the artificial neural networks. Results: In the auditory-perceptual assessment, the Intraclass Correlation Coefficient (ICC) showed excellent inter- and intra-rater agreement, with 0.85 for inter-rater agreement and values ranging from 0.87 to 0.93 for intra-rater agreement.
Regarding the performance of the artificial neural network in discriminating breathiness and roughness and their respective grades, the best performance for breathiness was obtained with the subset composed of jitter, pitch amplitude and fundamental frequency, which yielded a hit rate of 74%, excellent agreement with the visual analogue scale auditory-perceptual assessment (ICC of 0.80) and a mean error of 9 mm. For roughness, the best subset comprised the Wavelet Packet Transform with one decomposition level, jitter, shimmer, pitch amplitude and fundamental frequency, yielding 73% accuracy, excellent agreement (ICC of 0.84) and a mean error of 10 mm. Conclusion: Artificial intelligence based on artificial neural networks for identifying and grading roughness and breathiness showed excellent reliability (ICC > 0.80), with results similar to the inter-rater agreement. The artificial neural network thus proves to be a promising methodology for vocal assessment, its greatest advantage being the objectivity of the evaluation.
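Of the acoustic parameters listed, jitter (cycle-to-cycle variability of the fundamental period) is the simplest to sketch. The function below implements the standard relative-jitter ("jitter local") definition, not necessarily the exact variant used in the study, and the period values are invented:

```python
def relative_jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    normalized by the mean period (classic relative-jitter definition)."""
    if len(periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# Perfectly regular phonation -> zero jitter
steady = [0.005] * 10                          # 5 ms periods (200 Hz)
# Slightly irregular phonation -> small positive jitter
perturbed = [0.005, 0.0051, 0.0049, 0.005, 0.0052]
```

Higher jitter values correlate with perceived roughness, which is why the parameter appears in the best-performing roughness subset.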

Relevância:

60.00%

Publicador:

Resumo:

Background: The Physician Readiness to Manage Intimate Partner Violence Survey (PREMIS) is one of the most comprehensive questionnaires available internationally for assessing the capacity of primary health care professionals to respond to intimate partner violence. The objective of this study was to determine the reliability, internal consistency and construct validity of the Spanish version of this questionnaire. Methods: After translation, back-translation and content-validity assessment of the questionnaire, it was distributed in 2013 to a sample of 200 medical and nursing professionals from 15 primary care centres in 4 Autonomous Communities (Comunidad Valenciana, Castilla León, Murcia and Cantabria). Cronbach's alpha, intraclass correlation and Spearman's rho (test-retest) coefficients were calculated. Results: The Spanish version of PREMIS included 64 items. Cronbach's alpha was above, or very close to, 0.7 for most indices. An intraclass correlation coefficient of 0.87 and a Spearman coefficient of 0.67 were obtained, indicating high reliability. All correlations observed for the opinions scale, the only one treated as a factor structure in the PREMIS questionnaire, were above 0.30. Conclusions: The Spanish PREMIS showed good internal validity, high reliability, and predictive capacity regarding the practices self-reported by physicians and nurses facing cases of intimate partner violence in primary care centres.
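The test-retest figure reported above is a Spearman rank correlation. A minimal pure-Python version (using the classic no-ties formula; production statistics packages also handle tied ranks) looks like this, with invented scores:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via rho = 1 - 6*sum(d^2)/(n*(n^2-1)).
    Valid when there are no tied values."""
    assert len(x) == len(y) and len(x) > 1

    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Identical orderings at test and retest give rho = 1.0 (illustrative scores)
scores_t1 = [12, 45, 30, 22, 50]
scores_t2 = [14, 47, 33, 25, 49]
```

A rho of 0.67, as reported, indicates that respondents' relative orderings were largely preserved between the two administrations.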

Relevância:

60.00%

Publicador:

Resumo:

Generating innovation in Brazil has become a high-level priority over the last two decades. Innovation, according to presidential speeches, is the right way to lead the nation towards the development of competitive technological capabilities through high-technology products and services. Although the nation has come a long way, Brazil still faces the challenge of overcoming shortcomings in its infrastructure conditions for innovation. This paper aims to describe the main conditions for managing innovation in Brazil. It offers a quantitative analysis of the main factors that impact innovation. This is a documentary study based on data collected from highly reliable international sources, complemented by field research applied to a sample of technology-based firms located in São José dos Campos, Brazil. The results indicate that entrepreneurs struggle to develop the managerial competences needed to manage business growth while developing new products and services. The lack of human resources qualified to manage business in a technological environment is also an issue.

Relevância:

60.00%

Publicador:

Resumo:

In emergency situations, where the time available for blood transfusion is reduced, O-negative blood (the universal donor) is administered. However, sometimes even the universal donor can cause transfusion reactions that can be fatal to the patient. As commercial systems do not provide fast results and are not suitable for emergency situations, this paper presents the steps considered in the development and validation of a prototype able to determine blood-type compatibility even in emergency situations. The developed system thus makes it possible to administer a compatible blood type from the first blood unit transfused. In order to increase the system's reliability, the prototype uses two different approaches to classify blood types, the first based on decision trees and the second on support vector machines. The features used to evaluate these classifiers are the standard deviation values, histogram, Histogram of Oriented Gradients and fast Fourier transform, computed on different regions of interest. The main characteristics of the prototype are small size, light weight, easy transportation, ease of use, fast results, high reliability and low cost. These features are perfectly suited to the emergency scenarios where the prototype is expected to be used.
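The decision-tree approach can be caricatured as threshold tests on per-region image features: agglutination in a reagent region changes the local texture statistics. The function name, the single standard-deviation feature and the 0.5 threshold below are all invented for illustration; the paper's trees are learned from the full feature set:

```python
def classify_abo(std_anti_a, std_anti_b):
    """Toy stand-in for a learned decision-tree blood-type classifier.
    Agglutination in a reagent region raises the region's standard
    deviation; the 0.5 threshold is purely illustrative."""
    THRESHOLD = 0.5
    a = std_anti_a > THRESHOLD     # reaction with anti-A reagent
    b = std_anti_b > THRESHOLD     # reaction with anti-B reagent
    if a and b:
        return "AB"
    if a:
        return "A"
    if b:
        return "B"
    return "O"
```

A real deployment would add the Rh (anti-D) region and calibrate thresholds, or learn the whole tree, from labelled slide images.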

Relevância:

60.00%

Publicador:

Resumo:

Using current software engineering technology, the robustness required of safety-critical software cannot be assured. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). Verifying the correctness of the system design specification and analysing the performance of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communications. This may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems.
Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. The design tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysing the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the tool was applied to a number of Occam programs; two examples show how it works in the early design phase, for fault prevention before the program is ever run.
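The reachability analysis performed by the simulator can be sketched as a breadth-first exploration of markings. The tiny net below is our own example, not one of the Occam programs analysed: a marking from which no transition is enabled is a deadlock:

```python
from collections import deque

def reachable_markings(initial, transitions):
    """BFS over the reachability set of a place/transition net.
    Each transition is a (consume, produce) pair of per-place vectors.
    Returns all reachable markings and the dead ones (deadlocks)."""
    seen = {initial}
    dead = set()
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        enabled = False
        for consume, produce in transitions:
            if all(m[i] >= consume[i] for i in range(len(m))):
                enabled = True
                nxt = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if not enabled:
            dead.add(m)             # no transition fires here: deadlock
    return seen, dead

# Two tokens move one by one from place p0 to p1; once both have moved,
# nothing can fire, so (0, 2) is a deadlock marking.
transitions = [((1, 0), (0, 1))]
markings, deadlocks = reachable_markings((2, 0), transitions)
```

Real nets derived from Occam channels are far larger, which is why the tool builds the tree mechanically and lets the user explore the flagged markings.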

Relevância:

60.00%

Publicador:

Resumo:

The development of ultra-long (UL) cavity (hundreds of metres to several kilometres) mode-locked fibre lasers for the generation of high-energy light pulses at relatively low (sub-megahertz) repetition rates has emerged as a new, rapidly advancing area of laser physics. The first demonstration of a high-pulse-energy laser of this type was followed by a number of publications from many research groups on long-cavity Ytterbium and Erbium lasers featuring a variety of configurations with rather different mode-locked operations. The substantial interest in this new approach is stimulated both by the non-trivial underlying physics and by the potential of high-pulse-energy laser sources with unique parameters for a range of applications in industry, bio-medicine, metrology and telecommunications. It is well known that pulse-generation regimes in mode-locked fibre lasers are determined by the intra-cavity balance between the effects of dispersion and nonlinearity and the processes of energy attenuation and amplification. The highest per-pulse energy has been achieved in normal-dispersion UL fibre lasers self-mode-locked through nonlinear polarization evolution (NPE); such lasers generate so-called dissipative optical solitons. The uncompensated net normal dispersion in long-cavity resonators usually leads to a very high chirp and, consequently, to a relatively long duration of the generated pulses. This thesis presents the results of research on NPE-mode-locked Er-doped ultra-long (more than 1 km cavity length) fibre lasers. A self-mode-locked erbium-based 3.5 km long all-fibre laser with 1.7 µJ pulse energy at a wavelength of 1.55 µm was developed as part of this research. It achieved direct generation of short laser pulses with an ultralow repetition rate of 35.1 kHz. The laser cavity has net normal dispersion and was fabricated from commercially available telecom fibres and fibre-optic elements.
Its unconventional linear-ring design with compensation for polarization instability ensures highly reliable self-mode-locking operation despite the use of non-polarization-maintaining fibres. Single-pulse generation in an all-fibre erbium NPE mode-locked laser with a record cavity length of 25 km was also demonstrated. Mode-locked lasers with such a long cavity had never been studied before; this result shows the feasibility of stable mode-locked operation even for an ultra-long cavity. A new fibre laser cavity design, the 'y-configuration', offering a range of new functionalities for optimizing and stabilizing mode-locked lasing regimes, was proposed and successfully implemented in a long-cavity normal-dispersion self-mode-locked Er-fibre laser. In particular, it provides compensation for polarization instability, suppression of ASE, reduction of pulse duration, prevention of in-cavity wave breaking, and stabilization of the lasing wavelength. This laser, together with a specially designed double-pass EDFA, allowed us to demonstrate an environmentally stable all-fibre laser system able to deliver sub-nanosecond high-energy pulses with a low level of ASE noise.

Relevância:

60.00%

Publicador:

Resumo:

This paper introduces a joint load-balancing and hotspot-mitigation protocol for mobile ad hoc networks (MANETs), which we term the 'load_energy balance + hotspot mitigation protocol (LEB+HM)'. We argue that although ad hoc wireless networks have limited network resources (bandwidth and power), are prone to frequent link/node failures and carry high security risk, existing ad hoc routing protocols place no emphasis on maintaining robust links/nodes, using network resources efficiently, or maintaining the security of the network. Typical route-selection metrics used by existing ad hoc routing protocols are shortest hop, shortest delay and loop avoidance. These routing philosophies tend to concentrate traffic on certain regions or nodes, leading to heavy contention, congestion and resource exhaustion, which in turn may result in increased end-to-end delay, packet loss and faster battery depletion, degrading the overall performance of the network. Also, most existing on-demand ad hoc routing protocols allow intermediate nodes to send a route reply (RREP) to the source in response to a route request (RREQ). In such a situation a malicious node can send a false optimal route to the source, so that data packets are directed to or through it, and can tamper with them as it wishes. It is therefore desirable to adopt routing schemes that can dynamically disperse traffic load, detect and remove possible bottlenecks, and provide some form of security to the network. In this paper we propose a combined adaptive load_energy-balancing and hotspot-mitigation scheme that aims to distribute network traffic load and energy evenly, mitigate any occurrence of hotspots and provide some form of security to the network. This combined approach is expected to yield the high reliability, availability and robustness that best suit a dynamic and scalable ad hoc network environment.
Dynamic source routing (DSR) was used as the underlying protocol for the implementation of our algorithm. Simulation comparison of our protocol with the original DSR shows that our protocol reduces node/link failures, distributes battery energy more evenly, and offers better network service efficiency.
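The load/energy-aware route selection argued for above can be sketched as a cost function over candidate routes; the weights, field names and node values below are ours for illustration, not those of LEB+HM:

```python
def route_cost(route, w_load=0.5, w_energy=0.5):
    """Cost of a candidate route: congested or energy-depleted nodes make
    a route expensive, steering traffic away from potential hotspots.
    Each node is a dict with 'queue' (0..1 load) and 'energy' (0..1 left)."""
    return sum(w_load * n["queue"] + w_energy * (1.0 - n["energy"])
               for n in route)

def select_route(routes):
    """Pick the cheapest route instead of blindly taking the shortest one."""
    return min(routes, key=route_cost)

# A one-hop route through a hot, nearly drained node loses to a two-hop
# route through lightly loaded, well-charged nodes.
short_but_hot = [{"queue": 0.9, "energy": 0.2}]
longer_but_fresh = [{"queue": 0.1, "energy": 0.9},
                    {"queue": 0.1, "energy": 0.9}]
```

Contrast this with a pure shortest-hop metric, which would always pick `short_but_hot` and aggravate the hotspot.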

Relevância:

60.00%

Publicador:

Resumo:

During medical emergencies, the ability to communicate the state and position of injured individuals is essential. In critical situations or crowd aggregations, this may prove difficult or even impossible due to the inaccuracy of verbal communication, the lack of precise localization of the medical events, and/or the failure or congestion of infrastructure-based communication networks. In such a scenario, a temporary (ad hoc) wireless network for disseminating medical alarms to the closest hospital, or to medical field personnel, can usefully be employed to overcome these limitations. This is particularly true if the ad hoc network relies on the mobile phones that people normally carry, since these are automatically distributed where the communication needs arise. Nevertheless, the feasibility and possible implications of such a network for medical alarm dissemination need to be analysed. To this aim, this paper presents a study on the feasibility of medical alarm dissemination through mobile phones in an urban environment, based on realistic people mobility. The results show a dependence of the medical alarm delivery rates on both people density and hospital density. For the urban scenario considered, the time needed to deliver medical alarms to the neighbouring hospital with high reliability is on the order of minutes, demonstrating the practicability of the reported network for medical alarm dissemination. © 2013 Elsevier Ltd. All rights reserved.
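The density dependence can be illustrated with a deterministic one-dimensional toy model, entirely ours and far simpler than the realistic mobility traces used in the paper: phones spaced evenly along a road relay the alarm once per time step, so delivery is only possible above a density threshold that keeps the network connected:

```python
import math

def delivery_time_steps(distance_m, phones_per_km, radio_range_m=100.0):
    """Steps for an alarm to cover `distance_m`, assuming evenly spaced
    static phones that each relay once per step. Returns None when the
    spacing exceeds the radio range (the network is partitioned)."""
    spacing = 1000.0 / phones_per_km
    if spacing > radio_range_m:
        return None                              # no multi-hop path exists
    hops_per_step = radio_range_m // spacing     # phones reached per relay
    reach_per_step = hops_per_step * spacing
    return math.ceil(distance_m / reach_per_step)

# Dense crowds keep the chain connected; sparse ones partition it.
```

Even this crude model reproduces the qualitative finding: below a critical people density no alarm gets through at all, while above it delivery time is governed by the hop distance to the hospital.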

Relevância:

60.00%

Publicador:

Resumo:

This paper presents the development and experimental validation of a novel angular-velocity-observer-based field-oriented control algorithm for a promising low-cost brushless doubly fed reluctance generator (BDFRG) in wind power applications. The BDFRG has been receiving increasing attention because of its use of partially rated power electronics, the high reliability of the brushless design, and performance competitive with its popular slip-ring counterpart, the doubly fed induction generator. The controller's viability has been demonstrated on a BDFRG laboratory test facility emulating the variable speed and loading conditions of wind turbines or pump drives.

Relevância:

60.00%

Publicador:

Resumo:

Online learning systems (OLS) have become center stage for corporations and educational institutions as a competitive tool in the knowledge economy. The satisfaction construct has received extensive coverage in the information systems literature as an indicator of effectiveness, but has been criticized for lack of validity; the value construct, by contrast, has been largely ignored, although it has a long history in psychology, sociology, and behavioral science. The purpose of this dissertation is to investigate the value and satisfaction constructs in the context of OLS, and the relationship between them as perceived by learners, as an implied measure of OLS effectiveness. First, a qualitative phase is employed to gather OLS values from learners' focus groups, followed by a pilot phase to refine a proposed instrument, and a main phase to validate the survey. Responses were received from 75 students in four focus groups, 141 in the pilot, and 207 in the main survey. Extensive data cleaning and exploratory factor analysis were performed to identify factors of learners' perceived value and satisfaction of OLS. Value-Satisfaction grids and the Learners' Value Index of Satisfaction (LeVIS) were then developed as benchmarking tools for OLS. Moreover, Multicriteria Decision Analysis (MCDA) techniques were employed to impute value from satisfaction scores in order to reduce survey response time. The results provided four satisfaction and four value factors with high reliability (Cronbach's α). Value and satisfaction were found to have low linear and nonlinear correlations, indicating that they are two distinct, uncorrelated constructs; this is consistent with the literature. The Value-Satisfaction grids and the LeVIS index indicated relatively high effectiveness for technology and support characteristics and relatively low effectiveness for professor characteristics, while course and learner characteristics indicated average effectiveness.
The main contributions of this study include identifying, defining, and articulating the relationship between the value and satisfaction constructs as an assessment of users' implied IS effectiveness, as well as assessing the accuracy of MCDA procedures in predicting value scores, thus halving the size of the survey questionnaire.
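The reliability figure quoted above, Cronbach's α, is computed from the item-score matrix; a small pure-Python version (with invented scores) shows the mechanics:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (each inner list
    holds one item's scores across respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])

    def variance(xs):                      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three items that move together across five respondents -> alpha near 1
consistent = [[1, 2, 3, 4, 5],
              [1, 2, 3, 4, 5],
              [2, 2, 3, 4, 5]]
```

Values above about 0.7 are conventionally taken as acceptable internal consistency for a factor, which is the standard the dissertation's factors are judged against.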

Relevância:

60.00%

Publicador:

Resumo:

The tragic events of September 11th ushered in a new era of unprecedented challenges. Our nation has to be protected from the alarming threats of adversaries, threats that exploit the nation's critical infrastructures and affect all sectors of the economy. There is a need for pervasive monitoring and decentralized control of the nation's critical infrastructures. The communication needs of monitoring and control of critical infrastructures were traditionally catered for by wired communication systems. These technologies ensured high reliability and bandwidth, but are very expensive and inflexible, and do not support mobility or pervasive monitoring. Their communication protocols are Ethernet-based and use contention access protocols, which result in high rates of unsuccessful transmission and high delay. An emerging class of wireless networks, embedded wireless sensor and actuator networks, has potential benefits for real-time monitoring and control of critical infrastructures. The use of embedded wireless networks for monitoring and control of critical infrastructures requires secure, reliable and timely exchange of information among controllers, distributed sensors and actuators. The exchange of information takes place over shared wireless media. However, wireless media are highly unpredictable due to path loss, shadow fading and ambient noise, while monitoring and control applications have stringent requirements on reliability, delay and security. The primary issue addressed in this dissertation is the impact of wireless media in harsh industrial environments on the reliable and timely delivery of critical data. In the first part of the dissertation, a combined networking and information-theoretic approach is adopted to determine the transmit power required to maintain a minimum wireless channel capacity for reliable data transmission. The second part describes a channel-aware scheduling scheme that ensures efficient utilization of the wireless link and guarantees delay.
Various analytical evaluations and simulations are used to evaluate and validate the feasibility of the methodologies and to demonstrate that the protocols achieve reliable and real-time data delivery in wireless industrial networks.
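The first part's idea, choosing transmit power to keep a minimum Shannon capacity over the link, can be sketched by inverting C = B·log2(1 + P·g/N) for P. The symbols and the numeric values below are illustrative, not the dissertation's channel model:

```python
def required_tx_power(capacity_bps, bandwidth_hz, noise_w, channel_gain):
    """Transmit power P such that the Shannon capacity
    C = B * log2(1 + P*g/N) meets the target capacity."""
    snr_needed = 2 ** (capacity_bps / bandwidth_hz) - 1
    return snr_needed * noise_w / channel_gain

# Target 1 Mb/s over a 1 MHz link: an SNR of 1 (0 dB) suffices,
# so the required power is simply N/g.
p = required_tx_power(1e6, 1e6, 1e-9, 1e-6)
```

In a fading industrial channel, the gain g drops during shadowing events, so the same formula dictates raising P (or lowering the target rate) to keep the capacity floor.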

Relevância:

60.00%

Publicador:

Resumo:

Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies. The dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system; thus, in order to obtain satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and the nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to the SSBI to develop state-space-averaged models of the SSBI under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, was built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
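The state-space averaging step can be illustrated on a generic two-switch-state converter; the 2x2 matrices below are placeholders, not the SSBI's actual model. The averaged state matrix is the duty-cycle-weighted combination of the two switch-state matrices:

```python
def average_matrices(a_on, a_off, duty):
    """State-space averaging: A_avg = d*A_on + (1-d)*A_off, where d is the
    duty cycle. The same weighting applies to the input matrices B_on/B_off."""
    assert 0.0 <= duty <= 1.0
    rows, cols = len(a_on), len(a_on[0])
    return [[duty * a_on[i][j] + (1.0 - duty) * a_off[i][j]
             for j in range(cols)] for i in range(rows)]

# Placeholder switch-state matrices for a state vector of
# (inductor current, capacitor voltage) in normalized units.
A_ON = [[0.0, -1.0],
        [1.0,  0.0]]
A_OFF = [[0.0,  0.0],
         [0.0, -1.0]]
A_AVG = average_matrices(A_ON, A_OFF, 0.4)
```

The small-signal model mentioned above then comes from perturbing d around its operating point in A_avg and keeping the first-order terms.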