936 results for Reliability in automation
Abstract:
Objective: To determine the psychometric properties of two scales designed to examine attitudes regarding palliative care: the Comfort Scale in Palliative Care (CSPC, Pereira et al.) and the Thanatophobia Scale (TS, Merrill et al.). Method: Seventy-seven students who completed an online course on psychosocial aspects of palliative care offered by the Latin American Association of Palliative Care participated in the study. They completed the scales before and after the course. Construct validity and reliability of the CSPC and the TS were assessed using Principal Components Analysis, an internal consistency coefficient and test-retest reliability. In addition, comparative statistics between the pre-course and post-course results were obtained in order to determine changes in attitudes. Results: The Principal Components Analysis showed satisfactory fit to the data. Three components were extracted, two for the CSPC and one for the TS, which explained 55.37% of the variance. Internal consistency coefficients were satisfactory in all cases and Cronbach's alphas were satisfactory for all the scales, particularly for the CSPC. Test-retest reliability between t1 and t2 was non-significant, indicating that the measures were not related over time. Regarding pre-course/post-course comparisons, significant changes in comfort assisting patients (p = 0.004) and comfort assisting families (p = 0.001) following the course were identified, but changes in thanatophobia were non-significant (p > 0.05). Conclusions: Both scales are valid and reliable. Attitudes regarding the practice of palliative care and how they change, particularly regarding psychosocial issues, can be accurately measured using the examined scales.
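For readers unfamiliar with the reliability statistics cited above, the following is a minimal sketch in Python of how Cronbach's alpha and the variance explained by principal components are typically computed. The item responses are invented for illustration; this is not the CSPC/TS data or the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def pca_explained_variance(items: np.ndarray, n_components: int) -> float:
    """Fraction of variance explained by the first n principal components
    of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending eigenvalues
    return eigvals[:n_components].sum() / eigvals.sum()

# Hypothetical example: 77 respondents answering 10 Likert-type items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(77, 10)).astype(float)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
print("Variance explained by 3 components:",
      round(100 * pca_explained_variance(responses, 3), 1), "%")
```

Test-retest reliability, also mentioned in the abstract, would simply be the correlation between the t1 and t2 total scores of the same respondents.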
Abstract:
Students reflect more on their learning in course subjects when they participate in managing their teaching–learning environment. As a form of guided participation, peer assessment serves the following purposes: (a) it improves the student's understanding of previously established learning objectives; (b) it is a powerful metacognitive tool; (c) it transfers to the student part of the responsibility for assessing learning, which means deciding which learning activities are important and choosing the degree of effort a course subject will require; (d) it emphasizes the collective nature of knowledge; and (e) the educational benefits derived from peer assessment clearly justify the effort required to implement such activities. This paper reports on the relative merits of a learning portfolio compiled during fine-arts-related studies in which peer assessment played an important role. The researchers analyzed the student workload and the final marks students received for compulsory art subjects. They conclude that the use of a closed learning portfolio with a well-structured, sequential and analytical design can have a positive effect on student learning and that, although implementing peer assessment may be complex and students need to become familiar with it, its use is not only feasible but recommendable.
Abstract:
This Master's degree dissertation makes a comparative study of internal air temperature data simulated with the thermal simulation application DesignBuilder 1.2 and data registered in loco with a HOBO® Temp Data Logger in a Social Housing Prototype (HIS) located at the Central Campus of the Federal University of Rio Grande do Norte (UFRN). The prototype was designed and built following thermal comfort strategies recommended for the local climate, using cellular concrete panels supplied by Construtora DoisA, a collaborator of the research project REPESC Rede de Pesquisa em Eficiência Energética de Sistemas Construtivos (Research Network on Energy Efficiency of Construction Systems), an integral part of the Habitare program. The methodology examined the problem and reviewed the bibliography, analyzing the major aspects of computer simulation of the thermal performance of buildings, such as the climate characterization of the region under study and users' thermal comfort demands. DesignBuilder 1.2 was used as the simulation tool, theoretical alterations were carried out on the prototype, and the results were compared with the thermal comfort parameters adopted from the area's current technical literature. The comparative analyses were presented through graphical outputs for a better understanding of air temperature amplitudes and thermal comfort conditions. The data used to characterize the external air temperature were obtained from the Test Reference Year (TRY) defined for the study area (Natal-RN). The author also performed comparative studies for TRY data registered in 2006, 2007 and 2008 at the Davis Precision Station weather station, located at the Instituto Nacional de Pesquisas Espaciais INPE-CRN (National Institute of Space Research), in an area neighboring UFRN's Central Campus. The conclusions drawn from the comparisons between the computer simulations and the local records obtained from the studied prototype point out that simulating naturally ventilated buildings is quite a complex task, owing to the application's limitations, mainly the complexity of air flow phenomena, the influence of comfort conditions in the surrounding areas and the climate records. Lastly, regarding the use of DesignBuilder 1.2 in the present study, one may conclude that it is a good tool for computer simulation, although it needs some adjustments to improve the reliability of its use. Continued research is needed, considering the dedication of users to the prototype as well as the thermal loads of the equipment, in order to check sensitivity
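As a rough illustration of the kind of simulated-versus-measured comparison described above, the sketch below computes simple agreement metrics (mean bias error, RMSE, amplitudes) between two hourly indoor air temperature series. The series are synthetic placeholders, not DesignBuilder output or HOBO logger readings from the dissertation.

```python
import numpy as np

def compare_series(simulated: np.ndarray, measured: np.ndarray) -> dict:
    """Basic agreement metrics between simulated and measured air temperatures (°C)."""
    error = simulated - measured
    return {
        "mean_bias_error": error.mean(),            # systematic over/under-prediction
        "rmse": np.sqrt((error ** 2).mean()),       # overall deviation
        "max_abs_error": np.abs(error).max(),
        "simulated_amplitude": simulated.max() - simulated.min(),
        "measured_amplitude": measured.max() - measured.min(),
    }

# Hypothetical 24-hour series (°C); a real study would read logger and simulation files.
hours = np.arange(24)
measured = 27.0 + 2.5 * np.sin((hours - 9) * np.pi / 12)
simulated = measured + np.random.default_rng(1).normal(0.4, 0.6, size=24)

for name, value in compare_series(simulated, measured).items():
    print(f"{name}: {value:.2f}")
```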
Abstract:
In recent years there has been a clear evolution in the world of telecommunications, ranging from new services that need higher speeds and greater bandwidth to a whole set of interactions between people and machines known as the Internet of Things (IoT). Optical communications are the only technology able to keep up with this growth. Currently, the solution that meets day-to-day needs such as collaborative work, audio and video communications and file sharing is based on the Gigabit-capable Passive Optical Network (G-PON) and its recent successor, the Next Generation Passive Optical Network Phase 2 (NG-PON2). This technology is based on multiplexing in the wavelength domain and, due to its characteristics and performance, is the most advantageous option. A major focus of optical communications is Photonic Integrated Circuits (PICs). These can include various components in a single device, which simplifies the design of the optical system, reduces space and power consumption, and improves reliability. These characteristics make this type of device useful for several applications, which justifies the investment in developing the technology to a very high level of performance and reliability in terms of building blocks. With the goal of developing the optical networks of future generations, this work presents the design and implementation of a PIC intended to be a universal transceiver for NG-PON2 applications. The same PIC can be used as an Optical Line Terminal (OLT) or an Optical Network Unit (ONU), and in both cases as transmitter and receiver. Initially, a study of Passive Optical Networks (PONs) and their standards is made. A theoretical overview then explores the materials used in the development and production of this PIC, the foundries available and, focusing on SMART Photonics, the components used in the development of this chip. For the conceptualization of the project, different architectures are designed and part of the laser cavity is simulated using Aspic™. Through the analysis of the advantages and disadvantages of each one, the best is chosen for the implementation. Moreover, the architecture of the transceiver is simulated block by block with VPItransmissionMaker™ and its operating principle is demonstrated. Finally, the PIC implementation is presented.
Abstract:
Master's in Accounting
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use in recent years, and nowadays driverless transport systems are already state of the art on certain legs of transportation. This has pushed the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data has been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting, as well as the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method with which to conduct cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs in this way, it was argued that the activity-based LCC model is able to facilitate learning from and continuous improvement of the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
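To illustrate how activity-based life cycle costing can be combined with Monte Carlo simulation, the sketch below draws uncertain activity volumes and unit costs and aggregates them into a distribution of annual operating cost. The activities, distributions and figures are invented for the example; they are not the AAWA model or its data.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo trials

# Hypothetical activities: (cost-driver volume per year, unit cost in EUR).
# Both are modelled as uncertain; the distribution choices are purely illustrative.
activities = {
    "remote monitoring (hours)": (rng.triangular(7000, 8000, 9000, N), rng.normal(60, 5, N)),
    "port calls (visits)":       (rng.triangular(40, 50, 60, N),       rng.normal(12_000, 1500, N)),
    "maintenance (work orders)": (rng.triangular(80, 100, 130, N),     rng.normal(2500, 400, N)),
    "fuel (tonnes)":             (rng.triangular(1800, 2000, 2300, N), rng.normal(550, 60, N)),
}

annual_cost = sum(volume * unit_cost for volume, unit_cost in activities.values())

print(f"mean annual operating cost: {annual_cost.mean():,.0f} EUR")
print(f"5th-95th percentile range:  {np.percentile(annual_cost, 5):,.0f} - "
      f"{np.percentile(annual_cost, 95):,.0f} EUR")

# Activities whose cost contribution varies most are candidate critical success factors.
for name, (volume, unit_cost) in activities.items():
    print(f"{name}: std = {(volume * unit_cost).std():,.0f} EUR")
```

The same structure lends itself to the monitoring role described in the abstract: once real incurred costs are recorded per activity, they can replace the sampled values and be compared against the forecast distribution.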
Abstract:
Some philosophers hold that causal relations are grounded in the laws of nature. This conception fits poorly with the reality of the biomedical and human sciences. To come closer to the actual practice of the various sciences, James Woodward proposes a conception of causation and causal explanation based on a relation much less demanding than that of a law of nature, which he calls invariance. The aim of this thesis is to present the concept of invariance and the other causal concepts attached to it, and to identify certain difficulties, in order to delineate the appropriate use of this family of concepts. Woodward's causal conception assumes that the goal of the search for causes is practical rather than merely epistemic: agents rely on causes in order to modify phenomena. This conception is also non-reductive; it uses counterfactuals and reflects the experimental methods of the various sciences. The coherence of this conception with the actual causal generalizations of the sciences means that it abandons the goal of universality attached to the notion of a law of nature, in favour of a goal of temporary reliability. Moreover, since the invariance criterion is not very demanding, other criteria must be added to it in order to identify, among causal (that is, invariant) relations, those most likely to be used to modify phenomena reliably.
Abstract:
Polymer aluminum electrolytic capacitors were introduced to provide an alternative to liquid electrolytic capacitors. Their electrical parameters, capacitance and ESR, are less temperature dependent than those of liquid aluminum electrolytic capacitors. Furthermore, the electrical conductivity of the polymer used in these capacitors (poly-3,4-ethylenedioxythiophene, PEDOT) is orders of magnitude higher than that of the electrolytes used in liquid aluminum electrolytic capacitors, resulting in capacitors with much lower equivalent series resistance that are suitable for high ripple-current applications. The presence of the moisture-sensitive polymer PEDOT raises concerns about the reliability of polymer aluminum capacitors in high-humidity conditions. Highly accelerated stress testing (HAST) at 110°C and 85% relative humidity, in which the parts were subjected to unbiased HAST conditions for 700 hours, was performed to understand the design factors that make a polymer aluminum electrolytic capacitor susceptible to degradation under HAST conditions. A large-scale study involving capacitors of different electrical ratings (2.5 V–16 V, 100 µF–470 µF), mounting types (surface-mount and through-hole) and manufacturers (6 different manufacturers) was conducted to determine a relationship between package geometry and reliability in high temperature-humidity conditions. A geometry-based HAST test, in which the part selection limited variations between capacitor samples to geometric differences only, was carried out to analyze the effect of package geometry on humidity-driven degradation more closely. Raman spectroscopy, x-ray imaging, environmental scanning electron microscopy, and destructive analysis of the capacitors after HAST exposure were used to determine the failure mechanisms of polymer aluminum capacitors under high temperature-humidity conditions.
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
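To make the cross-domain coupling argument concrete, here is a deliberately simplified sketch (not the dissertation's framework): leakage power rises with temperature and temperature rises with power, so the two domains have to be solved together, for example by fixed-point iteration over a crude per-layer thermal resistance model. All coefficients below are invented and not calibrated to any real technology.

```python
# A toy fixed-point co-simulation of power and temperature for a two-layer 3D stack.

def leakage(temp_c: float, p_leak_ref: float, t_ref: float = 25.0) -> float:
    """Leakage power grows roughly exponentially with temperature."""
    return p_leak_ref * 1.03 ** (temp_c - t_ref)

def solve_two_layer_stack(p_dyn, p_leak_ref, r0=0.2, r1=0.3, t_amb=45.0, iters=100):
    """Iterate power <-> temperature to convergence.  Layer 0 sits on the heat
    sink, layer 1 is stacked above it; r0 and r1 are thermal resistances in K/W."""
    t = [t_amb, t_amb]
    for _ in range(iters):
        p = [p_dyn[i] + leakage(t[i], p_leak_ref[i]) for i in range(2)]
        t[0] = t_amb + r0 * (p[0] + p[1])   # all heat leaves through layer 0
        t[1] = t[0] + r1 * p[1]             # layer 1 adds its own temperature rise
    return p, t

powers, temps = solve_two_layer_stack(p_dyn=[20.0, 15.0], p_leak_ref=[3.0, 2.5])
print("converged power (W):       ", [round(x, 2) for x in powers])
print("converged temperature (°C):", [round(x, 1) for x in temps])
```

Optimizing power or temperature in isolation on such a system gives a different (and infeasible or pessimistic) answer than the coupled solution, which is the point the co-design argument makes.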
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.
Abstract:
Objective: To evaluate the psychometric properties of instruments for measuring physical activity in adults aged 18-65 with physical disability due to spinal cord injury. Materials and methods: Systematic review. The Medline, Scopus and Web of Science databases and 19 specialized journals were searched over eleven days between April 2015 and February 2016 to identify original validation studies, with no time limit, published in Spanish, French and/or English. The methodological quality of the measurement instruments was assessed using the different property boxes of the COSMIN checklist. Results: 9,229 references were identified, of which only 12 met the inclusion criteria, yielding 13 measurement instruments. Six psychometric properties were evaluated. The most common property was reliability; it was also observed that the methodological quality of the included studies does not reflect the results of the psychometric properties of the measurement instruments. The methodological quality of the instruments for assessing physical activity in the population with spinal cord injury is "low" for properties such as internal consistency, measurement error, sensitivity and criterion validity (except for the WISCI II, which has good validity), and excellent for content validity and reliability. Conclusion: The instruments used to date to measure physical activity in the population with physical disability related to spinal cord injury were created for other types of disability, and other instruments should be validated in future studies.
Abstract:
Introduction and objective: The self-report physical fitness scale IFIS, "The International FItness Scale", was created as part of the EU-funded HELENA Study, "Healthy Lifestyle in Europe by Nutrition in Adolescence". To date, no study is known to have examined self-reported physical fitness in a context other than the European one. This work assesses, by self-report, health-related physical fitness (HRPF) in a sample of children and adolescents from the district of Bogotá belonging to the FUPRECOL group. Materials and methods: Cross-sectional study of 1,922 schoolchildren (54.3% girls). The IFIS scale was applied in a self-administered manner. Weight, height and waist circumference were measured, and body mass index (BMI) was calculated in kg/m2. Aerobic capacity, a general strength index (handgrip z-score + standing long jump z-score), speed/agility and flexibility were used as objective, directly measured indicators of HRPF. Results: The sample consisted of 1,922 schoolchildren, of whom 1,045 were girls (54.3%) and 877 boys (45.6%). The ANOVA analysis showed that boys had higher values of weight (p<0.003), height (p<0.001), waist circumference (p<0.001), aerobic capacity (p<0.001), speed/agility (p<0.001) and general strength index (p<0.001), while girls presented excess weight by BMI (overweight and obesity). For overall physical fitness, the highest scores on the IFIS scale fell in the "good" category (40%), followed by "acceptable" (34%), while the lowest proportion fell in the "very poor/poor" category (6%). In the overall population, linear relationships were observed between self-reported HRPF on the IFIS scale and most of the objectively assessed fitness indicators. Post-hoc analysis adjusted for sex, age and maturation stage revealed that schoolchildren reporting better values in the self-perceived IFIS domains performed better on the objective HRPF indicators. Conclusion: This work shows for the first time in a Latin American population that self-report with the IFIS scale is a valid instrument to assess HRPF and has an adequate capacity to classify physical fitness in schoolchildren from Bogotá, Colombia. This scale is available to other researchers interested in assessing physical fitness in Latin America.
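As a small illustration of how the general strength index mentioned above can be composed, the sketch below standardizes handgrip and standing long jump to z-scores and sums them. The measurements are made-up numbers, not the FUPRECOL data.

```python
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardize to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical measurements for 8 schoolchildren.
handgrip_kg = np.array([18.5, 22.0, 19.3, 25.1, 17.8, 21.4, 23.9, 20.2])
long_jump_cm = np.array([130, 155, 141, 168, 126, 150, 162, 137])

# General strength index = handgrip z-score + standing long jump z-score.
strength_index = zscore(handgrip_kg) + zscore(long_jump_cm)
for i, s in enumerate(strength_index, start=1):
    print(f"child {i}: strength index = {s:+.2f}")
```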
Abstract:
The present work proposes different approaches to extend the mathematical methods of supervisory energy management used in terrestrial applications to the maritime sector, which differs in its constraints, variables and disturbances. The aim is to find the optimal real-time solution that includes the minimization of a defined track time while maintaining the classical energetic approach. Starting from the analysis and modelling of the powertrain and boat dynamics, the energy economy problem is formulated following the mathematical principles behind optimal control theory. Then, an adaptation aimed at finding a winning strategy for the Monaco Energy Boat Challenge endurance trial is performed via the ECMS and A-ECMS control strategies, which leads to a more accurate knowledge of the energy sources and the boat's behaviour. The simulations show that the algorithm meets the fuel economy and time optimization targets, but the latter adds considerable tuning and computational complexity. In order to assess a practical implementation on real hardware, the knowledge gained from the previous approaches has been translated into a rule-based algorithm that can run on an embedded CPU. Finally, the algorithm has been tuned and tested in a real-world race scenario, showing promising results.
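To illustrate the ECMS idea referenced above, the following is a generic textbook-style sketch, not the thesis's implementation or the Monaco Energy Boat Challenge model: at each instant the power demand is split between a fuel converter and a battery by minimizing an equivalent fuel consumption, in which battery power is priced through an equivalence factor s. All limits and efficiencies are invented.

```python
import numpy as np

FUEL_LHV = 43e6          # J/kg, lower heating value (illustrative)
ENGINE_EFF = 0.30        # constant fuel-converter efficiency (illustrative)
P_ENGINE_MAX = 10e3      # W
P_BATT_MAX = 6e3         # W (discharge positive, charge negative)

def ecms_split(p_demand: float, s: float = 2.5) -> tuple[float, float]:
    """Choose the engine/battery power split minimizing the equivalent fuel rate.
    s is the ECMS equivalence factor converting battery power into virtual fuel."""
    candidates = np.linspace(-P_BATT_MAX, P_BATT_MAX, 241)   # battery power grid
    best = (float("inf"), 0.0, 0.0)
    for p_batt in candidates:
        p_eng = p_demand - p_batt
        if not 0.0 <= p_eng <= P_ENGINE_MAX:
            continue
        fuel_rate = p_eng / (ENGINE_EFF * FUEL_LHV)          # kg/s of real fuel
        equiv_rate = fuel_rate + s * p_batt / FUEL_LHV       # add virtual fuel for battery use
        if equiv_rate < best[0]:
            best = (equiv_rate, p_eng, p_batt)
    return best[1], best[2]

p_eng, p_batt = ecms_split(p_demand=8e3)
print(f"engine: {p_eng:.0f} W, battery: {p_batt:.0f} W")
```

A-ECMS, also mentioned in the abstract, adapts s online (typically from the battery state of charge) instead of keeping it fixed as in this sketch.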
Abstract:
In this thesis, a tube-based Distributed Economic Predictive Control (DEPC) scheme is presented for a group of dynamically coupled linear subsystems. These subsystems are components of a large-scale system, and their control inputs are computed by optimizing a local economic objective. Each subsystem interacts with its neighbors by sending them its future reference trajectory at each sampling time, and solves a local optimization problem in parallel based on the received future reference trajectories of the other subsystems. To ensure recursive feasibility and a performance bound, each subsystem is constrained not to deviate too much from its communicated reference trajectory. The difference between the planned trajectory and the communicated one is interpreted as a disturbance at the local level. Then, to ensure satisfaction of both state and input constraints, the constraints are tightened by explicitly considering the effect of these local disturbances. The proposed approach averages over all possible disturbances, handles tightened state and input constraints, and satisfies the compatibility constraints that guarantee the actual trajectory lies within a certain bound in the neighborhood of the reference one. Each subsystem optimizes a local, arbitrary economic objective function in parallel while considering a local terminal constraint to guarantee recursive feasibility. In this framework, economic performance guarantees for a tube-based distributed predictive control (DPC) scheme are developed rigorously. It is shown that the closed-loop nominal subsystem has a local robust average performance bound which is no worse than that of a local robust steady state. Since the robust algorithm is applied to the states of the real (disturbed) subsystems, this bound can be interpreted as an average performance result for the real closed-loop system. To this end, we present our results on local and global performance, illustrated by a numerical example.
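The constraint-tightening logic can be sketched for a scalar subsystem as follows; this is a toy illustration with invented numbers, not the thesis's formulation. The neighbours' deviation is treated as a bounded disturbance, a tube radius follows from the local ancillary feedback, and the nominal economic MPC is solved against the tightened constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Toy scalar subsystem x+ = a*x + b*u + w, where w models the neighbours'
# deviation from their communicated reference trajectories, bounded by w_max.
a, b, k = 0.9, 1.0, -0.5      # dynamics and ancillary feedback u = u_nom + k*(x - x_nom)
w_max = 0.1
x_max, u_max = 2.0, 1.0       # original box constraints
N = 10                        # prediction horizon

# Error dynamics e+ = (a + b*k)*e + w keep |e| <= w_max / (1 - |a + b*k|) (the tube radius),
# so the nominal problem uses constraints tightened by that radius.
tube = w_max / (1.0 - abs(a + b * k))
x_bound = x_max - tube                 # tightened state constraint
u_bound = u_max - abs(k) * tube        # tightened input constraint

def economic_cost(u_seq, x0):
    """Arbitrary local economic stage cost summed over the nominal prediction."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += 0.8 * u + 0.1 * (x - 1.0) ** 2   # illustrative economic objective
        x = a * x + b * u
    return cost

def state_constraint_slack(u_seq, x0):
    """Slack of the tightened state constraints along the nominal prediction (>= 0 if feasible)."""
    x, slacks = x0, []
    for u in u_seq:
        x = a * x + b * u
        slacks += [x_bound - x, x_bound + x]
    return np.array(slacks)

res = minimize(economic_cost, np.zeros(N), args=(0.5,),
               bounds=[(-u_bound, u_bound)] * N,
               constraints={"type": "ineq", "fun": state_constraint_slack, "args": (0.5,)})
print("first nominal input:", round(res.x[0], 3), "| tube radius:", round(tube, 3))
```

In the distributed scheme described in the abstract, each subsystem would additionally bound its deviation from its own communicated reference (the compatibility constraint), which is what keeps the neighbours' disturbance bound w_max valid.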