936 results for "Reliability in automation"


Relevance: 90.00%

Abstract:

This dissertation investigates high-performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimation in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle); TNs are equipped with omnidirectional antennas and communicate with BNs so that the BNs can localize them. The proposed localization thus relies on cooperation between BNs and TNs.

First, a LOS localization method based on semi-distributed multi-node TOA-DOA fusion is proposed. The technique is applicable to mobile ad-hoc networks (MANETs), assuming LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in its coordinate system. Each BN can independently localize the TNs in its coverage area, and a TN may be localized by multiple BNs; high-performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed fusion is low because the total computational load is distributed across all BNs. To evaluate localization accuracy, the proposed method is compared with global positioning system (GPS)-aided TOA (DOA) fusion, which is also applicable to MANETs, using the localization circular error probability (CEP) as the criterion. The results confirm that the proposed method is suitable for moderate-scale MANETs, while GPS-aided TOA fusion is suitable for large-scale MANETs.

Because the TOA and DOA of TNs are usually estimated periodically by BNs, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve performance. This integration is compared with an extended KF (EKF) applied to the multiple TOA-DOA estimates made by multiple BNs. The comparison shows that the integration is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF when the EKF converges. However, the EKF may diverge while the KF integration does not; thus, the reliability of the proposed method is higher. In addition, its computational complexity is much lower than that of the EKF.

In wireless environments, LOS might be obstructed, which degrades localization reliability. The antenna array installed at each BN is exploited to allow each BN to identify NLOS scenarios independently. A single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it onto the wireless channel's K-factor; the larger the K-factor, the more likely the channel is LOS. The K-factor is then used to identify NLOS scenarios. The performance of this system is characterized in terms of the probabilities of LOS and NLOS identification, and the latency of the method is small.

Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. Here, multiple BNs engage in NLOS identification, determination and localization of shared reflectors, and NLOS TN localization. In NLOS scenarios with three or more shared reflectors, the reflectors are localized via DOA fusion, and the TN is then localized via TOA fusion based on the localization of the shared reflectors.
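To make the KF-versus-EKF point above concrete, here is a minimal sketch (not the dissertation's implementation) of KF-based fusion of periodic position fixes: converting each BN's (range, angle) estimate to a Cartesian fix keeps the measurement model linear in the state, so a plain KF suffices, whereas an EKF operating directly on the nonlinear (range, angle) measurements may diverge. The motion model, noise levels, sampling period, and data below are illustrative assumptions.

```python
import numpy as np

dt = 1.0                                 # assumed estimation period (s)
F = np.array([[1, 0, dt, 0],             # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],              # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                     # assumed process noise
R = 1.0 * np.eye(2)                      # assumed measurement noise

def kf_step(x, P, z):
    """One predict/update cycle for a Cartesian fix z = (x_meas, y_meas),
    obtained from a BN's TOA-DOA pair as (r*cos(theta), r*sin(theta))."""
    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), 10.0 * np.eye(4)     # state [x, y, vx, vy], broad prior
for z in [np.array([3.1, 4.2]),          # fixes from two BNs in one period
          np.array([2.9, 4.0])]:
    x, P = kf_step(x, P, z)
print(x[:2])                             # fused TN position estimate
```

Fusing the fixes from multiple BNs amounts to running the update once per BN in each period, which is consistent with the low per-node computational load claimed above.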

Relevance: 90.00%

Abstract:

OBJECT: Ultrasound may be a reliable but simpler alternative to intraoperative MR imaging (iMR imaging) for tumor resection control. However, its reliability in the detection of tumor remnants has not been definitively proven. The aim of this study was to compare high-field iMR imaging (1.5 T) and high-resolution 2D ultrasound for tumor resection control. METHODS: A prospective comparative study of 26 consecutive patients was performed. The following parameters were compared: the existence of tumor remnants after presumed radical removal and the quality of the images. Tumor remnants were categorized as detectable with both imaging modalities or visible with only one modality. RESULTS: Tumor remnants were detected in 21 cases (80.8%) with iMR imaging. All large remnants were demonstrated with both modalities, and their image quality was good. Two-dimensional ultrasound was less effective in detecting remnants < 1 cm: two remnants detected with iMR imaging were missed by ultrasound. In 2 cases, suspicious signals visible only on ultrasound images were misinterpreted as remnants but turned out to be a blood clot and peritumoral parenchyma. The average acquisition time was 2 minutes for an ultrasound image versus approximately 10 minutes for an iMR image. Neither modality resulted in any procedure-related complications or morbidity. CONCLUSIONS: Intraoperative MR imaging is more precise than 2D ultrasound in detecting small tumor remnants. Nevertheless, the latter may be used as a less expensive and less time-consuming alternative that provides almost real-time feedback. Its accuracy is highest for more confined, deeply located remnants; for more superficially located remnants, its role is more limited.
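From the counts reported above, the ultrasound's approximate performance can be backed out, under the assumption (not stated by the authors) that iMR imaging serves as the reference standard:

```python
# Rough worked example using the reported counts; treating iMR imaging
# as the reference standard is an interpretive assumption.
imr_remnants = 21        # remnants detected with iMR imaging
us_missed = 2            # remnants seen on iMR imaging but missed by US
us_false_pos = 2         # US signals that were not remnants (clot, parenchyma)

us_true_pos = imr_remnants - us_missed
sensitivity = us_true_pos / imr_remnants                 # ~0.905
ppv = us_true_pos / (us_true_pos + us_false_pos)         # ~0.905
print(f"US sensitivity vs iMRI: {sensitivity:.1%}, PPV: {ppv:.1%}")
```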

Relevance: 90.00%

Abstract:

BACKGROUND: The human waking EEG spectrum shows high heritability and stability and, despite maturational cortical changes, high test-retest reliability in children and teens. These phenomena have also been shown to be region specific. We examined the stability of the morphology of the waking EEG spectrum in children aged 11 to 13 years recorded at weekly intervals and assessed whether the waking EEG spectrum in children may also be trait-like. Three minutes of eyes-open and three minutes of eyes-closed waking EEG were recorded in 22 healthy children once a week for three consecutive weeks. Eyes-open and eyes-closed EEG power density spectra were calculated for two central (C3LM and C4LM) and two occipital (O1LM and O2LM) derivations. A hierarchical cluster analysis was performed to determine whether the morphology of the waking EEG spectrum between 1 and 20 Hz is trait-like. We also examined the stability of the alpha peak using an ANOVA. RESULTS: The morphology of the EEG spectrum recorded from central derivations was highly stable and unique to an individual (correctly classified in 85% of participants), while the EEG recorded from occipital derivations, although stable, was much less unique across individuals (correctly classified in 42% of participants). Furthermore, our analysis revealed an increase in alpha peak height concurrent with a decline in alpha peak frequency across weeks for occipital derivations; no changes in either measure were observed in the central derivations. CONCLUSIONS: Our results indicate that across weekly recordings, power spectra at central derivations exhibit more "trait-like" characteristics than occipital derivations. These results may be relevant for future studies searching for links between phenotypes, such as psychiatric diagnoses, and the underlying genes (i.e., endophenotypes), suggesting that such studies should use more anterior rather than posterior EEG derivations.
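A minimal sketch of this style of analysis follows: compute a 1-20 Hz power spectrum per recording, then hierarchically cluster the spectra and ask whether the three weekly recordings of each child land in the same cluster. The sampling rate, window length, and data are placeholders, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.cluster.hierarchy import linkage, fcluster

fs = 256                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
# 22 children x 3 weeks of 3-minute recordings, ordered child by child
recordings = rng.normal(size=(66, 3 * 60 * fs))

spectra = []
for sig in recordings:
    f, pxx = welch(sig, fs=fs, nperseg=4 * fs)
    band = (f >= 1) & (f <= 20)
    spectra.append(np.log(pxx[band]))       # log power, 1-20 Hz
spectra = np.array(spectra)

Z = linkage(spectra, method="average", metric="correlation")
labels = fcluster(Z, t=22, criterion="maxclust")
# "Trait-like": a child's three weekly spectra share one cluster label.
correct = sum(len(set(labels[3 * i:3 * i + 3])) == 1 for i in range(22))
print(f"{correct}/22 participants with all weeks in one cluster")
```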

Relevance: 90.00%

Abstract:

Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have been attracting increasing attention for decades due to their broad civilian and military applications. Basically, a MANET or WSN is a network of nodes connected by wireless communication links. Due to the limited transmission range of the radio, many pairs of nodes may not be able to communicate directly and need intermediate nodes to forward packets for them. Routing in such networks is an important issue, and it poses great challenges due to the dynamic nature of MANETs and WSNs. On the one hand, the open-air nature of wireless environments makes efficient routing difficult: the wireless channel is unreliable due to fading and interference, which makes it hard to maintain a high-quality path from a source node to a destination node; node mobility aggravates network dynamics, causing frequent topology changes and significant overhead for maintaining and recalculating paths; and mobile devices and sensors are usually constrained by battery capacity and by computing and communication resources, which limits the functionality of routing protocols. On the other hand, the wireless medium possesses inherent, unique characteristics that can be exploited to enhance transmission reliability and routing performance.

Opportunistic routing (OR) is one promising technique that takes advantage of the spatial diversity and broadcast nature of the wireless medium to improve packet-forwarding reliability in multihop wireless communication. OR combats unreliable wireless links by involving multiple neighboring nodes (forwarding candidates) in choosing the packet forwarder. In opportunistic routing, a source node does not require an end-to-end path to transmit packets; the forwarding decision is made hop by hop in a fully distributed fashion.

Motivated by the deficiencies of existing opportunistic routing protocols in dynamic environments such as MANETs and WSNs, this thesis proposes a novel context-aware adaptive opportunistic routing scheme. The proposal selects packet forwarders by simultaneously exploiting multiple types of cross-layer context information about nodes and their environment, and it significantly outperforms routing protocols that rely on a single metric. Its adaptivity enables network nodes to adjust their behavior at run time according to network conditions. To accommodate the strict energy constraints of WSNs, the thesis also integrates an adaptive duty-cycling mechanism into opportunistic routing for wireless sensor nodes, dynamically adjusting the sleeping intervals of sensor nodes according to the monitored traffic load and the estimated energy consumption rate. Through this integration of sensor-node duty cycling and opportunistic routing, the protocol provides a satisfactory balance between routing performance and energy efficiency for WSNs.
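A toy sketch of the opportunistic-forwarding idea described above: candidates are ranked before transmission, and after the broadcast the highest-priority candidate that actually received the packet forwards it. The thesis's scheme exploits multiple adaptive cross-layer metrics; the single composite score, field names, and weights here are invented for illustration.

```python
def score(candidate):
    """Composite forwarding score: more progress toward the destination,
    better link quality, and more residual energy => higher score."""
    w_progress, w_link, w_energy = 0.5, 0.3, 0.2      # assumed weights
    return (w_progress * candidate["progress"]
            + w_link * candidate["link_quality"]
            + w_energy * candidate["energy"])

def select_forwarder(neighbors, acked):
    """Candidate priorities are fixed before transmission; the first
    candidate in priority order that heard the packet forwards it."""
    for c in sorted(neighbors, key=score, reverse=True):
        if c["id"] in acked:
            return c["id"]
    return None                # nobody received it: the sender retransmits

neighbors = [
    {"id": "n1", "progress": 0.8, "link_quality": 0.4, "energy": 0.9},
    {"id": "n2", "progress": 0.5, "link_quality": 0.9, "energy": 0.6},
]
print(select_forwarder(neighbors, acked={"n2"}))      # n1 missed it -> 'n2'
```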

Relevance: 90.00%

Abstract:

Need for cognition (NFC) reflects a relatively stable trait regarding the degree to which one enjoys and engages in cognitive endeavors. We examined whether the previously demonstrated one-dimensional structure of the German NFC Scale could be replicated in three samples of undergraduates and secondary school students. Moreover, we investigated the test-retest reliability of the German NFC Scale, which had not yet been tested, and whether the scale is valid in a sample of secondary school students. Multigroup confirmatory factor analyses established the one-dimensional factor structure of both the long form and the short form of the German NFC Scale for undergraduates (N = 559), students of academic-track secondary schools (German Gymnasium; N = 555), and students of vocational-track secondary schools (German Realschule; N = 486). The scale showed high test-retest reliability in a university student sample (N = 43). For secondary school students, we again found high test-retest reliability (N = 157) and also found the scale to be valid (N = 181).
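As a minimal sketch of what a test-retest reliability check looks like (the abstract does not state which coefficient was used; the Pearson correlation between the two administrations shown here is one common choice, and the scores are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
true_trait = rng.normal(0, 1, 43)            # stable NFC level per person
t1 = true_trait + rng.normal(0, 0.3, 43)     # administration 1 (noisy)
t2 = true_trait + rng.normal(0, 0.3, 43)     # administration 2 (noisy)

r = np.corrcoef(t1, t2)[0, 1]                # high when trait dominates noise
print(f"test-retest r = {r:.2f}")
```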

Relevance: 90.00%

Abstract:

Falling on the outstretched hand is a common trauma mechanism. In contrast to fractures of the distal radius, which are usually diagnosed on plain-film radiographs, identifying other wrist injuries requires further diagnostic methods, e.g., MRI or CT. This article reviews the use of MRI in the most common traumatic wrist injuries, including scaphoid fractures, TFCC lesions, and tears of the scapholunate ligament. Early and selective use of MRI in cases of adequate clinical suspicion helps to initiate the correct treatment and thus prevents long-term arthrotic damage and reduces unnecessary sick leave. MRI shows high reliability in the diagnosis of scaphoid fractures, and the American College of Radiology recommends it as the method of choice after plain radiographs have been obtained. In the diagnosis of ligament and discoid lesions, MR arthrography (MRA) using intraarticular contrast agent has considerably higher accuracy than i.v.-enhanced and especially unenhanced MRI.

Relevance: 90.00%

Abstract:

Background: The aim of this study was to evaluate the validity and the inter- and intra-examiner reliability of panoramic-radiography-driven findings of different maxillary sinus anatomic variations and pathologies that had initially been prediagnosed by cone beam computed tomography (CBCT). Methods: After pairs of two-dimensional (2D) panoramic and three-dimensional (3D) CBCT images of patients treated at the outpatient department had been screened, 54 selected maxillary sinus conditions were initially predefined on the CBCT images by two blinded consultants individually, using a questionnaire that defined ten clinically relevant findings. Using the identical questionnaire, the consultants evaluated the panoramic radiographs at a later time point. The results were analyzed for inter-imaging differences in the evaluation of the maxillary sinus between the 2D and 3D imaging methods. Additionally, two resident groups (first year and last year of training) performed two diagnostic runs on the panoramic radiographs, and the results were analyzed for inter- and intra-observer reliability. Results: There is a moderate risk of false diagnosis of maxillary sinus findings if only panoramic radiography is used. Of the ten predefined conditions, only maxillary bone cysts penetrating into the sinus were frequently rated differently between 2D and 3D diagnostics. Additionally, on panoramic radiographs, the inter-observer comparison demonstrated that ratings of basal septa differed significantly often, and the intra-observer comparison showed a significant lack of reliability in detecting maxillary bone cysts penetrating into the sinus. Conclusions: Panoramic radiography provides considerable information on the maxillary sinus and may be an adequate imaging method. However, particular findings of the maxillary sinus in panoramic imaging may rest on a rather examiner-dependent assessment. Therefore, a consistent and precise evaluation of specific conditions of the maxillary sinus may only be possible using CBCT, which provides additional information compared to panoramic radiography. This might be relevant for subsequent surgical procedures; consequently, we recommend CBCT if a precise preoperative evaluation is mandatory. However, the higher radiation dose and costs of 3D imaging need to be considered.

Keywords: Panoramic radiography; Cone beam computed tomography; Maxillary sinus; Inter-imaging method differences; Inter-examiner reliability; Intra-examiner reliability
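Inter-examiner agreement on paired categorical ratings like these is typically quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the ratings below are illustrative, not the study's data; 1 = finding present, 0 = absent):

```python
import numpy as np

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                           # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)     # agreement expected by chance
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

examiner1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
examiner2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]
print(f"kappa = {cohens_kappa(examiner1, examiner2):.2f}")   # 0.60 here
```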

Relevance: 90.00%

Abstract:

Physical activity is a key component of the lifestyle-modification process, which helps to reduce the risk of developing chronic diseases. Accurate estimates of physical activity are important for identifying sedentary populations where interventions might be helpful. The International Physical Activity Questionnaire (IPAQ) short version has been used to estimate physical activity in diverse populations; however, there is little literature on its use in the Mexican American population. This study addressed the predictive validity and test-retest reliability of the IPAQ short version in Mexican American adults. The analysis was performed on 97 participants, aged 18 years or older, enrolled in the Cameron County Hispanic Cohort. Predictive validity was evaluated by studying the relationship between physical activity and biomarkers known to be correlated with physical activity, namely TNF-α, adiponectin, and HDL, using multiple linear regression analysis. To assess test-retest reliability, two IPAQ short-version ("last seven days") questionnaires were interviewer-administered to the participants on the same day, approximately two hours apart, and test-retest reliability was estimated with intraclass correlations between the two readings. The study showed that the IPAQ short version had acceptable test-retest reliability in this Mexican American population but did not have acceptable predictive validity with respect to TNF-α, adiponectin, and HDL in this sample.
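A minimal sketch of the predictive-validity analysis described above: regress a biomarker on reported physical activity (plus a covariate) and inspect the activity coefficient. The data, variable names, and effect sizes are simulated placeholders, not the cohort's data; a null activity effect is built in to mirror the reported finding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 97
activity = rng.gamma(2.0, 600, n)               # MET-minutes/week from IPAQ
age = rng.uniform(18, 75, n)                    # illustrative covariate
hdl = 50 - 0.05 * age + rng.normal(0, 8, n)     # HDL unrelated to activity here

X = np.column_stack([np.ones(n), activity, age])     # intercept + predictors
beta, *_ = np.linalg.lstsq(X, hdl, rcond=None)       # ordinary least squares
print(f"HDL ~ activity coefficient: {beta[1]:.5f}")  # near zero, as simulated
```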

Relevance: 90.00%

Abstract:

Loneliness is a pervasive, rather common experience in American culture, particularly notable among adolescents. However, the phenomenon is not well documented in the cross-cultural psychiatric literature. For psychiatric epidemiology to encompass a wide array of psychopathologic phenomena, it is important to develop useful measures to characterize and classify both non-clinical and clinical dysfunction in diverse subgroups and cultures.

The goal of this research was to examine the cross-cultural reliability and construct validity of a scale designed to measure loneliness. The Roberts Loneliness Scale (RLS-8) was administered to 4,060 adolescents aged 10-19 years enrolled in high schools on either side of the Texas-Tamaulipas border between the U.S. and Mexico. Data collected in 1988 from a study of substance use and psychological distress among adolescents in these regions were used to examine the operating characteristics of the RLS-8. A sample stratified by nationality and language, age, gender, and grade was used for the analysis.

Results indicated that, in general, the RLS-8 has moderate reliability in the U.S. sample but not in the Mexican sample. Validity analyses showed evidence for convergent validity of the RLS-8 in the U.S. sample but none in the Mexican sample; discriminant validity could not be established in either sample. Based on the factor structure of the RLS-8, two subscales were created and analyzed for construct validity. Evidence for convergent validity was established for both subscales in both national samples, but the discriminant validity of the measure remains unsubstantiated in both, and the dimensionality of the scale is unresolved.

One primary goal for future cross-cultural research would be to develop and test better-defined, culture-specific models of loneliness within the two cultures. From such work, measures of loneliness can be developed or reconstructed to classify the phenomenon in the same manner across cultures. Since estimates of prevalence and incidence are contingent upon reliable and valid screening or diagnostic measures, this objective would serve as an important foundation for future psychiatric epidemiologic inquiry into loneliness.
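Scale reliability of the kind reported above is commonly summarized by an internal-consistency coefficient such as Cronbach's alpha (the study's exact estimator is not stated here). A minimal sketch with simulated responses to an 8-item scale:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (500, 1))              # shared loneliness factor
items = latent + rng.normal(0, 1.0, (500, 8))    # 8 items with equal noise
print(f"alpha = {cronbach_alpha(items):.2f}")    # ~0.89 for this simulation
```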

Relevance: 90.00%

Abstract:

Purpose: The purpose of this study was to assess the healthcare information needs of decision-makers in a local US healthcare setting in an effort to promote the translation of knowledge into action. The focus was on decision-makers' perceptions of and preferences regarding usable information, in order to identify strategies for maximizing the contribution of healthcare findings to policy and practice. Methods: This study used a qualitative data collection and analysis strategy. Data were collected via open-ended key-informant interviews with a sample of 37 public- and private-sector healthcare decision-makers in the Houston/Harris County safety net. The sample comprised high-level decision-makers, including legislators, executive managers, service providers, and healthcare funders. Decision-makers were asked to identify the types of information, the level of collaboration with outside agencies, the useful attributes of information, and the sources, formats/styles, and modes of information they preferred in making important decisions, along with the basis for their preferences. Results: Decision-makers report acquiring information, categorizing information as usable knowledge, and selecting information for use based on the application of four cross-cutting thought processes or cognitive frameworks. In order of apparent preference, these are time orientation, information-seeking directionality, selection of validation processes, and centrality of credibility/reliability. In applying the frameworks, decision-makers are influenced by numerous factors associated with their perceptions of the utility of information and the importance of collaboration with outside agencies, as well as by professional and organizational characteristics. Conclusion: An approach based on the elucidated cognitive frameworks may be valuable in identifying the reported contextual determinants of information use by decision-makers in US healthcare settings. Such an approach can facilitate active producer/user collaboration and promote the production of mutually valued, comprehensible, and usable findings, leading to sustainable long-term knowledge translation efforts.

Relevance: 90.00%

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society: more demands will be made on them, and greater reliance will be placed on the delivery of their services. A relevant subset comprises high-integrity, or hard real-time, systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, together with the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices, many of which can transfer data at high speed. From this emerged the concept of a distributed system: a system whose parts execute on several nodes that interact via a communication network.

Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This motivated the development of the Real-Time Specification for Java (RTSJ), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques; however, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the Hard Real-Time Java (HRTJ) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities for developing distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) to define appropriate abstractions to overcome this problem, but there is currently no formal specification.

The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the Remote Method Invocation (RMI) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of timing behavior and of resource usage. The design starts from the definition of a computational model that identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor functional and timing behavior, and it provides independence from the network protocol by defining a network interface and protocol modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.

Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems, and there are no alternatives. This thesis therefore proposes a predictable serialization built on a new compiler that generates optimized code according to the computational model. The proposed solution has the advantage of allowing communications to be scheduled and memory usage to be adjusted at compilation time. To validate the design and the implementation, a demanding validation process was carried out with emphasis on functional behavior, memory usage, processor usage (end-to-end response time and the response time of each functional block), and network usage (actual consumption versus calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
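The two-phase idea (reserve the resources an invocation will need, then execute invocations only against an existing reservation, keeping allocation off the time-critical path) can be sketched generically. This is an invented conceptual illustration in Python, not the thesis's Java/RMI middleware; all class and method names are assumptions.

```python
class RemoteReference:
    """Toy remote reference carrying the non-functional parameters of its
    invocations, in the spirit of the design described above."""

    def __init__(self, endpoint):
        self.endpoint = endpoint
        self.reservations = {}      # handle -> (deadline_ms, max_msg_bytes)

    # Phase 1: negotiate non-functional parameters and resources up front.
    def reserve(self, handle, deadline_ms, max_msg_bytes):
        self.reservations[handle] = (deadline_ms, max_msg_bytes)

    # Phase 2: the invocation proper, admitted only under a reservation,
    # so no negotiation or unbounded allocation occurs on the critical path.
    def invoke(self, handle, payload: bytes):
        if handle not in self.reservations:
            raise RuntimeError("invocation without a reservation")
        deadline_ms, max_msg_bytes = self.reservations[handle]
        if len(payload) > max_msg_bytes:
            raise RuntimeError("message exceeds reserved size")
        # ... marshal with a size-bounded (predictable) serializer, send via
        # the network module, and await the reply within deadline_ms ...
        return b"ok"

ref = RemoteReference("node-b:4040")
ref.reserve("sensor.read", deadline_ms=5, max_msg_bytes=256)
print(ref.invoke("sensor.read", b"\x01"))
```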

Relevance: 90.00%

Abstract:

Strained fins are one of the techniques used to improve devices as their size keeps shrinking in new nanoscale nodes. In this paper, we use a 14 nm predictive technology in which pMOS mobility is significantly improved when devices are built on top of long, uncut fins, while nMOS devices exhibit the opposite behavior due to the combination of strains. We explore the possibility of boosting circuit performance in repetitive structures where long uncut fins can be exploited to increase the impact of fin strain. In particular, pMOS pass-gates are used in 6T complementary SRAM cells (CSRAM) with reinforced pull-ups. These cells are simulated under process variability and compared to regular SRAM. We show that, when layout-dependent effects are considered, the CSRAM design provides 10% to 40% faster access time while keeping the same area, power, and stability as a regular 6T SRAM cell. The conclusions also apply to 8T SRAM cells. The CSRAM cell additionally shows increased reliability in technologies whose nMOS devices have more mismatch than their pMOS transistors.

Relevance: 90.00%

Abstract:

Due to the huge increase in digital data volumes in recent years, a new parallel computing paradigm for processing big data efficiently has arisen. Many of the systems based on this paradigm, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is the idea of sending the computation to where the data resides, in an attempt to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results; however, most of the scenarios where they are deployed are characterized by the existence of failures. Consequently, these platforms incorporate fault-tolerance and reliability features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost.

This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, it proposes: (i) a formalization of a failure-detector abstraction; (ii) an alternative solution to the single points of failure of these platforms; and (iii) a novel feedback-based resource-allocation system at the container level. These generic contributions have been instantiated and evaluated on the Hadoop YARN architecture, which is today the reference platform in the data-intensive computing community. The thesis demonstrates how all of these contributions outperform Hadoop YARN in both reliability and resource efficiency.
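A minimal sketch of a heartbeat-based failure detector, the classic instance of the abstraction formalized in contribution (i); the thesis's formalization is more general, and the timeout value and API names here are illustrative assumptions:

```python
import time

class HeartbeatFailureDetector:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}          # node id -> last heartbeat timestamp

    def heartbeat(self, node_id):
        """Record an incoming heartbeat from a worker node."""
        self.last_seen[node_id] = time.monotonic()

    def suspected(self):
        """Nodes whose heartbeat is overdue are suspected of having failed.
        Suspicion is revocable: a late heartbeat clears it."""
        now = time.monotonic()
        return {n for n, t in self.last_seen.items()
                if now - t > self.timeout_s}

fd = HeartbeatFailureDetector(timeout_s=5.0)
fd.heartbeat("worker-1")
fd.heartbeat("worker-2")
print(fd.suspected())                # empty set: both recently seen
```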

Relevance: 90.00%

Abstract:

Rationale and aims: 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising the rating guidelines. Methods: In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results: Inter-rater reliability was 'good to excellent' in the first study (kappas >= 0.53; ICCs >= 0.71). After the rating guidelines were revised, reliability was equivalent to or higher than that previously obtained (kappas >= 0.53; ICCs >= 0.89), except for the item 'groups similar at baseline', which still had moderate reliability (kappa = 0.53). In study 2, two PEDro scale items whose definitions had been revised, 'less than 15% dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (kappas = 0.53), 'less than 15% dropout' (kappas