422 results for Busbar sizing
Resumo:
The sizing of nursing human resources is an essential management tool for meeting the needs of patients and the institution. In the Intensive Care Unit, where the most critical patients are treated and the most advanced life-support equipment is used, requiring a large number of skilled workers, specific indicators to measure the team's workload become necessary. The Nursing Activities Score (NAS) is a validated instrument for measuring nursing workload in the Intensive Care Unit that has demonstrated effectiveness. This is a cross-sectional study whose primary objective was to assess the workload of the nursing staff of an adult Intensive Care Unit through the application of the Nursing Activities Score. The study was conducted in a private hospital specialized in the treatment of patients with cancer, located in the city of Natal (Rio Grande do Norte, Brazil). The study was approved by the Research Ethics Committee of the hospital (Protocol number 558.799; CAAE 24966013.7.0000.5293). For data collection, a form on the sociodemographic characteristics of the patients was used; the Nursing Activities Score was applied to identify the workload of the nursing staff; and the Perroca instrument, which classifies patients and provides data on their need for nursing care, was also used. The collected data were analyzed using a statistical package. Categorical variables were described by absolute and relative frequencies, and numerical variables by median and interquartile range. For the inferential analysis, the Spearman test, the Wald chi-square test, the Kruskal-Wallis test and the Mann-Whitney test were used. Variables with p values < 0.05 were considered statistically significant. The overall NAS means over the first 15 days of hospitalization were evaluated by Generalized Estimating Equations (GEE), adjusted for length of hospitalization.
The sample consisted of 40 patients treated from June to August 2014. The results showed a mean age of 62.1 years (±23.4) and a female predominance (57.5%). The most frequent type of treatment was clinical (60.0%), with an average stay of 6.9 days (±6.5). Regarding origin, most patients (35%) came from the Surgical Center. The mortality rate was 27.5%. In total, 277 NAS and Perroca measurements were performed, yielding means of 69.8% (±24.1) and 22.7% (±4.2), respectively. There was an association between clinical outcome and the 24-hour Nursing Activities Score (p < 0.001), and between patients' degree of dependency and nursing workload (rp 0.653, p < 0.001). The workload of the nursing staff in the analyzed period was high, showing that the hospitalized patients required a high level of care. These findings support staff sizing and the allocation of human resources in the unit, in order to achieve greater safety and patient satisfaction as a result of intensive care, as well as an environment conducive to quality of life for the professionals.
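The correlation reported between the Perroca dependency score and the NAS (rp 0.653) is a Spearman rank correlation. As an illustration only, with entirely invented patient data and a pure-Python implementation rather than the statistical package used in the study, the computation can be sketched as:

```python
def ranks(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example scores: Perroca dependency vs. NAS workload per patient
perroca = [18, 21, 22, 25, 27, 30]
nas = [45.0, 60.2, 58.1, 72.4, 80.3, 95.0]
print(round(spearman_rho(perroca, nas), 3))
```

A strong positive rho, as in the study, indicates that more dependent patients generate a higher nursing workload.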
Resumo:
Tailings dams are structures that retain the solid waste and water from mining processes. Their analysis and planning begin with the search for a deployment site, a step that brings together all the variables that directly or indirectly influence the work, such as geological, hydrological, tectonic, topographic, geotechnical, environmental and social characteristics, and the evaluation of security risks, among others. This paper presents a study on the most appropriate and secure type of dam for a structure for the disposal of iron ore tailings, taking all the above-mentioned variables into account. The case study involves the assessment of sites for the tailings disposal dams of an iron ore beneficiation plant to be built at the Bonito mine, in the municipality of Jucurutu in the Seridó Potiguar region. For site selection among the alternatives, various aspects of the current state of the art were considered: the option causing the least environmental impact, low investment cost, added value to the product and, especially, the safety of the implanted structure, mitigating the concern about earthquakes induced by the liquefaction of wastes retained by dams in the region, since the Mina Bonito works are located practically within the hydraulic basin of the Armando Ribeiro dam, an environmental protection area (APA). The methodology compares the seismicity induced by dams in the semiarid region with the characteristics of the tailings and waste-rock disposal, taking into account the enhancement of liquefaction by seismic action in the Mina Bonito region. Based on this methodology, the best dam type for the disposal of iron ore tailings, or a combination of types, is indicated for design and construction in the semiarid region, particularly for Mina Bonito. The paper also presents a number of possible uses for the tailings in engineering activities, which may turn their processing to the common good.
Resumo:
The unbridled consumption of electronic equipment, associated with the fast arrival of new technologies on the market, leads to the accelerated growth of electronic waste. Such waste mostly contains printed circuit boards, which carry many metals, including highly toxic heavy metals. Electronic waste is usually discarded improperly and indiscriminately, without any previous treatment and together with other municipal waste, contaminating the environment and causing serious problems to human health. Besides these metals, there are also precious and base metals of high added value that can be recovered and recycled, reducing the exploitation of natural resources. Thus, given the high potential for growth and reuse of this waste, treatment, characterization and separation processes were applied to the printed circuit boards. The boards were subjected to physical treatments such as dismantling, crushing, size separation and magnetic separation, and to chemical treatments such as pyrolysis and leaching. Through the characterization processes (pyrolysis and leaching), the proportions of the components in the granulometric range were determined: 46.08% metals, 23.32% polymers and 30.60% ceramics. It was also observed, by particle size separation, that metallic components tend to concentrate in the coarse fractions, while polymeric and ceramic components concentrate in the fine fractions. The magnetic separation process yielded 12.08% magnetic material and 82.33% non-magnetic material.
Resumo:
Nanotechnology is a multidisciplinary science that is booming today, providing new products with attractive physicochemical properties for many applications. In the agri/feed/food sector, nanotechnology offers great opportunities for obtaining innovative products and applications for agriculture and livestock, water treatment, and the production, processing, storage and packaging of food. To this end, a wide variety of nanomaterials is applied, ranging from metals and inorganic metal oxides to organic nanomaterials carrying bioactive ingredients. This review gives an overview of current and future applications of nanotechnology in the food industry. Food additives and materials in contact with food are currently the main applications, while future applications are expected in the fields of nano-encapsulation and nanocomposites, as novel foods, additives, biocides, pesticides and food-contact materials.
Resumo:
Background
There is a growing body of evidence suggesting patients with life-limiting illness use medicines inappropriately and unnecessarily. In this context, the perspective of patients, their carers and the healthcare professionals responsible for prescribing and monitoring their medication is important for developing deprescribing strategies. The aim of this study was to explore the lived experience of patients, carers and healthcare professionals in the context of medication use in life-limiting illness.
Methods
In-depth interviews, using a phenomenological approach: methods of transcendental phenomenology were used for the patient and carer interviews, while hermeneutic phenomenology was used for the healthcare professional interviews.
Results
The study highlighted that medication formed a significant part of a patient's day-to-day routine; this was also apparent for their carers, who took on an active role, as a gatekeeper of care, in managing medication. Patients described a point in their disease journey at which they placed less importance on taking certain medications; healthcare professionals also recognize this and refer to it as a 'transition'. This point appeared to occur when the patient came to accept their illness and associated life expectancy. There was also willingness by patients, carers and healthcare professionals to review and alter the medication used by patients in the context of life-limiting illness.
Conclusions
There is a need to develop deprescribing strategies for patients with life-limiting illness. Such strategies should seek to establish patient expectations, consider the timing of the discussion about ceasing treatment and encourage the involvement of other stakeholders in the decision-making process.
Resumo:
Future power systems are expected to integrate large-scale stochastic and intermittent generation and load due to reduced use of fossil fuel resources, including renewable energy sources (RES) and electric vehicles (EV). Inclusion of such resources poses challenges for the dynamic stability of synchronous transmission and distribution networks, not least in terms of generation where system inertia may not be wholly governed by large-scale generation but displaced by small-scale and localised generation. Energy storage systems (ESS) can limit the impact of dispersed and distributed generation by offering supporting reserve while accommodating large-scale EV connection; the latter (load) also participating in storage provision. In this paper, a local energy storage system (LESS) is proposed. The structure, requirement and optimal sizing of the LESS are discussed. Three operating modes are detailed, including: 1) storage pack management; 2) normal operation; and 3) contingency operation. The proposed LESS scheme is evaluated using simulation studies based on data obtained from the Northern Ireland regional and residential network.
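As a rough, hypothetical sketch of the three LESS operating modes listed above (all parameters, the half-hourly settlement interval, and the control rule itself are invented for illustration, not taken from the paper), a single control step might look like:

```python
def less_step(soc, surplus_kw, capacity_kwh, p_max_kw, dt_h=0.5,
              soc_min=0.1, soc_max=0.9, contingency=False):
    """One settlement period of a local energy storage system.

    Returns (new_soc, power_kw); positive power = charging,
    negative = discharging. Pack management is the soc_min/soc_max
    clamp; normal operation follows the local surplus/deficit;
    contingency operation releases the full reserve.
    """
    if contingency:
        # contingency operation: discharge at full rating down to soc_min
        p = -min(p_max_kw, (soc - soc_min) * capacity_kwh / dt_h)
    elif surplus_kw > 0:
        # normal operation, charging: absorb surplus within rating/headroom
        p = min(surplus_kw, p_max_kw, (soc_max - soc) * capacity_kwh / dt_h)
    else:
        # normal operation, discharging: cover deficit within limits
        p = -min(-surplus_kw, p_max_kw, (soc - soc_min) * capacity_kwh / dt_h)
    return soc + p * dt_h / capacity_kwh, p

# A 100 kWh / 25 kW unit at 50% charge sees a 40 kW local surplus:
soc, p = less_step(0.5, surplus_kw=40.0, capacity_kwh=100.0, p_max_kw=25.0)
print(round(soc, 3), p)  # it charges, but only at its 25 kW rating
```

The converter rating, not the available surplus, binds here, which is exactly the kind of trade-off an optimal sizing study must resolve.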
Resumo:
Energy auditing can be an important contribution to the identification and assessment of energy conservation measures (ECMs) in buildings. Numerous tools and software have been developed, with varying degrees of precision and complexity and different areas of use. This paper evaluates PHPP as a versatile, easy-to-use energy auditing tool and gives examples of how it has been compared to a dynamic simulation tool within the EU project iNSPiRe. PHPP is a monthly-balance energy calculation tool based on EN 13790. It is intended to assist the design of Passive Houses and energy renovation projects and to guide the choice of appropriate ECMs. PHPP was compared against the transient simulation software TRNSYS for a single-family house and a multi-family house. It should be mentioned that dynamic building simulations may depend strongly on the model assumptions and simplifications compared to reality, such as ideal heating versus a real heat emission system. Setting common boundary conditions for both PHPP and TRNSYS, the ideal heating and cooling loads and demands were compared on a monthly and annual basis for seven European locations and for buildings with different floor areas, S/V ratios, U-values and glazed areas of the external walls. The results show that PHPP can be used to assess the heating demand of single-zone buildings, and the reduction of heating demand with ECMs, with good precision. The estimation of cooling demand is also acceptable if an appropriate shading factor is applied in PHPP. In general, PHPP intentionally overestimates heating and cooling loads to be on the safe side for system sizing. Overall, the agreement with TRNSYS is better in cases with a higher-quality envelope, as in cold climates and for good energy standards. As an energy auditing tool intended for pre-design, PHPP is a good, versatile and easy-to-use alternative to more complex simulation tools.
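The monthly quasi-steady-state balance behind PHPP can be caricatured as transmission losses minus utilised gains. The following is a deliberately minimal sketch under that assumption; all inputs are invented, and a real PHPP/EN 13790 calculation additionally handles ventilation losses, solar apertures, a dynamic gain-utilisation factor and per-location climate data:

```python
def monthly_heating_demand_kwh(ht_w_per_k, t_set_c, t_ext_c, hours,
                               gains_kwh, eta_gain=0.95):
    """Monthly heating demand = transmission losses minus utilised gains.

    ht_w_per_k: overall transmission heat-loss coefficient H_T [W/K]
    eta_gain:   gain utilisation factor (a fixed placeholder here;
                EN 13790 computes it from the gain/loss ratio)
    """
    losses = ht_w_per_k * (t_set_c - t_ext_c) * hours / 1000.0  # kWh
    return max(0.0, losses - eta_gain * gains_kwh)

# A cold January (744 h) for a small house: H_T = 120 W/K,
# 20 degC setpoint, 0 degC mean outdoor temperature, 500 kWh of gains
q = monthly_heating_demand_kwh(120.0, 20.0, 0.0, 744, gains_kwh=500.0)
print(round(q, 1))
```

Summing such monthly terms over the year gives the annual heating demand that the paper compares against TRNSYS.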
Resumo:
The Battery Energy Storage System (BESS) offers formidable advantages in the generation, transmission, distribution and consumption of electrical energy. This technology is notably considered by several operators around the world as a new device allowing, on the one hand, the injection of large quantities of renewable energy and, on the other, serving as an essential component of large power grids. In addition, enormous benefits can be associated with deploying BESS technology both in smart grids and for reducing greenhouse gas emissions, reducing marginal losses, supplying certain consumers with an emergency energy source, improving energy management, and increasing energy efficiency in the networks. This thesis comprises three stages: Stage 1 concerns the use of the BESS to reduce electrical losses; Stage 2 uses the BESS as a spinning reserve element to mitigate network vulnerability; and Stage 3 introduces a new method for improving frequency oscillations by reactive power modulation, together with the use of the BESS to provide the primary frequency reserve. The first stage, on the use of the BESS for loss reduction, is itself subdivided into two sub-stages: the first devoted to optimal allocation and the second to optimal utilisation.
In the first sub-stage, the genetic algorithm NSGA-II (Non-dominated Sorting Genetic Algorithm II) was programmed on CASIR, IREQ's supercomputer, as a multi-objective evolutionary algorithm, to extract a set of solutions for the optimal sizing and suitable placement of multiple BESS units, minimising power losses while simultaneously considering the total installed BESS power capacity as objective functions. The first sub-stage gives a satisfactory answer to the allocation problem and also resolves the question of scheduling in the Québec interconnection. To achieve the objective of the second sub-stage, a number of solutions were retained and implemented over a one-year interval, taking into account the parameters (time, capacity, efficiency, power factor) associated with the BESS charge and discharge cycles, with the reduction of marginal losses and energy efficiency as the main objectives. In the second stage, a new vulnerability index, well suited to modern grids equipped with BESS, was introduced, formalised and studied. The NSGA-II genetic algorithm was then run again, with the minimisation of the proposed vulnerability index and energy efficiency as the main objectives. The results obtained show that the use of the BESS can, in some cases, prevent major network outages. The third stage presents a new concept of adding virtual inertia to power grids through reactive power modulation. The use of the BESS as a primary frequency reserve is then presented. Finally, a generic BESS model, associated with the Québec interconnection, was proposed in a MATLAB environment.
The simulation results confirm that the active and reactive power of the BESS can be used for frequency regulation.
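The NSGA-II runs described above rank candidate BESS configurations by Pareto dominance over two objectives to be minimised (network losses and total installed capacity). The following is a minimal sketch of the non-dominated front extraction at the core of that algorithm; the candidate list and objective values are invented, and a full NSGA-II additionally performs crowding-distance sorting, crossover and mutation:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Invented (losses_MW, installed_MWh) pairs for BESS siting candidates
candidates = [(3.2, 50), (2.8, 80), (3.0, 60), (3.5, 40), (2.8, 90)]
print(sorted(pareto_front(candidates)))
```

The dominated candidate (2.8, 90) is discarded: (2.8, 80) achieves the same losses with less installed capacity. What remains is the trade-off curve from which a planner picks a sizing/placement compromise.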
Resumo:
This work deals with the teacher-student relationship (TSR) at a very special moment: the graduation semester in architecture and urbanism, when students prepare the final work, called the Graduation Final Work (GFW). This is the last stage to obtain the title of architect and urban planner in Brazil. The text discusses this problem in several ways, emphasizing the relationship between the graduating student and his/her mentors in the process of consolidating the student as an actor in the planning process, here defined as "autonomy". The work focuses on understanding the TSR in order to elucidate its importance for improving the teaching bases of the development of the GFW, rather than the relation between curriculum and institution. In line with the exploratory character of this master's thesis, the field work was conducted through: (i) observation of mentorship guidance, (ii) interviews, and (iii) questionnaires applied to teachers and students. Ten pairs of students and mentors from two federal universities of the Northeast participated: five pairs from the Federal University of Ceará (UFC) and five pairs from the Federal University of Rio Grande do Norte (UFRN). The results presented the development of the GFW as a difficult and fearful process for students, highlighting the main problem situations: the difficulty in choosing the theme, the over-sizing of the process, the students' insecurities, and the paternal relationship with the supervisor during the process. Summing up, the work indicates that students have limited autonomy in the GFW process, which calls for a revision in order to promote the consolidation of student autonomy, including the recognition of the role of each actor in the mentoring process.
Resumo:
The interaction of magnetic fields generated by large superconducting coils has multiple applications in space, including actuation of spacecraft or spacecraft components, wireless power transfer, and shielding of spacecraft from radiation and high energy particles. These applications require coils with major diameters as large as 20 meters and a thermal management system to maintain the superconducting material of the coil below its critical temperature. Since a rigid thermal management system, such as a heat pipe, is unsuitable for compact stowage inside a 5 meter payload fairing, a thin-walled thermal enclosure is proposed. A 1.85 meter diameter test article consisting of a bladder layer for containing chilled nitrogen vapor, a restraint layer, and multilayer insulation was tested in a custom toroidal vacuum chamber. The material properties found during laboratory testing are used to predict the performance of the test article in low Earth orbit. Deployment motion of the same test article was measured using a motion capture system and the results are used to predict the deployment in space. A 20 meter major diameter and coil current of 6.7 MA is selected as a point design case. This design point represents a single coil in a high energy particle shielding system. Sizing of the thermal and structural components of the enclosure is completed. The thermal and deployment performance is predicted.
Resumo:
Final Master's Project submitted for the degree of Master in Civil Engineering, Specialization in Structures
Resumo:
Global Network for the Molecular Surveillance of Tuberculosis 2010: A. Miranda (Tuberculosis Laboratory of the National Institute of Health, Porto, Portugal)
Resumo:
The value of integrating a heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of incorporating the heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain a constant geothermal production and add heat storage into the network. Hot water will then be stored when heat demand is lower than the production, and the stored hot water will be released into the system to cover the peak demands (or part of them). The intention is not to totally phase out the peak-up devices, but to decrease their use, as these will often be installed anyway for back-up purposes. Both the integration of a heat storage in such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data throughout one year or more, and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage and the dimensions of the pipelines, among others. The results provide several useful insights into the initial design considerations for these systems, emphasizing particularly the importance of heat losses.
Simulations are carried out for three different cases of sizing of the installation (small, medium and large) to examine the influence of system scale. In the second phase of work, two algorithms are developed which study in detail the operation of the installation throughout a random day and a whole year, respectively. The first algorithm can be a potentially powerful tool for the operators of the installation, who can know a priori how to operate the installation on a random day given the heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised and less fuel is used in the peak-up boilers, with subsequent environmental benefits, when compared to the traditional case. Furthermore, it is shown that the most attractive case of sizing is the large one, although the addition of the heat storage most greatly impacts the medium case of sizing. In other words, the geothermal component of the installation should be sized as large as possible. This analysis indicates that the proposed solution is beneficial from energetic, economic and environmental perspectives. Therefore, it can be stated that the aim of this study is achieved to its full potential.
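The daily operating rule described above, constant geothermal production, a heat store absorbing the surplus, and peak-up boilers covering any residual deficit, can be sketched as follows. All figures are invented for illustration; the thesis' actual algorithms additionally account for pumping electricity, pipeline heat losses and storage losses:

```python
def dispatch_day(demand_mw, geo_mw, store_max_mwh, store0_mwh=0.0):
    """Hourly simulation of one day.

    Returns (final storage level in MWh, total peak-up boiler energy in MWh).
    Constant geothermal output geo_mw; surplus heat charges the store
    (spilling if full), deficits are met from the store first and from
    the boilers only as a last resort.
    """
    store, boiler = store0_mwh, 0.0
    for d in demand_mw:
        surplus = geo_mw - d  # MW over one hour -> MWh
        if surplus >= 0:
            store = min(store_max_mwh, store + surplus)  # charge, spill if full
        else:
            from_store = min(store, -surplus)  # discharge first
            store -= from_store
            boiler += -surplus - from_store    # boilers cover the remainder
    return store, boiler

# A stylised day: low night demand, morning and evening peaks (MW per hour)
day = [6] * 6 + [14] * 4 + [9] * 6 + [16] * 5 + [7] * 3
store, boiler = dispatch_day(day, geo_mw=10.0, store_max_mwh=30.0)
print(store, boiler)
```

Re-running such a dispatch with different store sizes and geothermal capacities, over a full year of demand data, is essentially what the sizing and annual-operation algorithms of the thesis automate.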
Furthermore, the new models for the sizing, operation and economic/energetic/environmental analysis of these kinds of systems can be used with few adaptations for real cases, making the practical applicability of this study evident. With this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers) and the integration of a heat pump to maximise utilisation of geothermal energy.
Resumo:
Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, whose operation was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical damage to the system. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question, but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half of the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and in a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each with a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems would be qualified for at least 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour of the magnet bypass diodes and of the degradation of other potentially weak busbar sections, as well as of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets.
In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV equivalent current levels with no measurable damage occurring. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing entire Run 1 measured event data sets and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. The results provided interesting insights into the phenomena involved and confirmed the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, were likely to become commonplace, resulting in significant long-term issues for 6.5+ TeV operation. Addressing the thesis hypothesis, the written works that follow detail the development and results of all CSCM qualification tests and subsequent magnet training, as well as the development and simulation results of both 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, resulting in otherwise uninitiated magnet quenches and being at the forefront of system availability issues.
Resumo:
This dissertation focuses on the implications, at the level of the constructive solutions present in the envelope of residential buildings, of the recent changes made to the Portuguese Regulation on the Energy Performance of Residential Buildings (REH). In order to assess energy performance through the application of the REH, a new single-family dwelling of type T3 was considered as a case study, located about 10 metres above mean sea level on the outskirts of the urban area of Vila Nova de Gaia. After determining the energy needs of the building under study, several simulations were carried out to identify and quantify the changes brought about by the entry into force of Portaria 379-A/2015, of 22 October. First, the thermal behaviour of the single-family dwelling was studied assuming different constructive solutions: those complying with the requirements in force until the end of 2015 and those complying with the current requirements. In this way, the implications of those changes for the energy needs of the dwelling were assessed. Next, using the same concept as the initial simulation, a study was carried out considering the dwelling located in the different climatic zones existing in Portugal. For this to be possible, the dwelling had to be considered at different geographic locations and altitudes. The importance of plane thermal bridges in heat transfer, in both seasons, was also evaluated. To this end, it was necessary to make a preliminary sizing of the adopted structural solution and to quantify the area of these elements and their heat transfer coefficient. Subsequently, the energy needs obtained with the fully defined structural solution were quantified, as well as those that would be obtained if its existence were neglected.
The comparative analyses of the different results obtained showed that the updated regulatory requirements to which residential buildings are subject have a great impact on the constructive systems adopted.