871 results for Construction. Indicators System. Performance. Ergonomics. Validation
Abstract:
The Portuguese Armed Forces are employed in Joint and Combined Operations within the framework of the alliances to which Portugal belongs, most notably the North Atlantic Treaty Organization. To improve operational effectiveness, standardization plays a key role in achieving interoperability among the forces of the Alliance. The main objective of this work was therefore to analyse and synthesize the standardization and Joint and Combined Military Doctrine development processes in the Portuguese Armed Forces. It also aims to identify gaps in the national processes, so as to contribute to a better integration and harmonization of Alliance doctrine, taking national doctrine and national interests into account. The research was based on hypothetico-deductive reasoning, supported by a qualitative strategy. The analysis model rested on hypotheses deduced from questions derived from the central question; in this analytical process we identified indicators that allowed the hypotheses to be validated, drawing for this purpose on two case studies and a SWOT analysis. The development of an action plan that allows the integration of the analysed processes is the main contribution of this research. Its main results are the need to create a Standardization Coordination Structure, essential to streamline standardization procedures between the Ministry of National Defence, the Armed Forces General Joint Staff and the Services, together with the establishment of a Supreme Doctrine Council in the Armed Forces to set the priorities and general guidelines for doctrine development.
Abstract:
Doctorate in Interdisciplinary Landscape Management - Instituto Superior de Agronomia / Universidade dos Açores / Universidade de Évora
Abstract:
The Portuguese Naval Academy (Escola Naval) has a failure rate of around 64%, losing more than half of the individuals admitted over the course of their studies. This project sets out to understand the causes of this low success rate by analysing data on individuals from application through to the end of the course, in order to identify the characteristics (referred to as variables) of the individuals most and least likely to succeed at the Naval Academy. The data are first analysed graphically: applicants are analysed independently, while admitted students and graduating students are analysed together to assess how the variables behave at the start and end of the courses. An increase in the weight of a variable in the graduating group relative to the admitted group is interpreted as an indicator of good performance. A statistical analysis is also used to assess the association between success and each variable in the admitted group, and to validate the conclusions drawn from the graphical analysis. The study revealed that individuals who enter the Naval Academy with better grades, and those who learned about the admission competition through the internet, show the greatest predisposition for success, whereas individuals with both parents in active employment are the least likely to succeed.
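As a minimal illustration of the kind of association analysis described above (not the project's actual code or data), the sketch below tests whether a categorical applicant variable is associated with course success among admitted students; the variable names and values are hypothetical:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical data: one row per admitted student.
admitted = pd.DataFrame({
    "heard_via_internet": [True, True, False, False, True, False, True, False],
    "graduated":          [True, False, False, True, True, False, True, False],
})

# Contingency table of the candidate variable against course success.
table = pd.crosstab(admitted["heard_via_internet"], admitted["graduated"])

# Chi-square test of independence: a small p-value suggests an association
# between the variable and success among admitted students.
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
```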
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know of the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
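For context on the size of the signal such surveys target, the sketch below evaluates the standard Keplerian radial-velocity semi-amplitude induced by a planet on a circular orbit; this is a textbook relation rather than a result from the thesis, and the example of an Earth-mass planet in a ~25-day orbit around a 0.3-solar-mass M dwarf is purely illustrative:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
DAY = 86400.0          # s

def rv_semi_amplitude(m_planet_kg, m_star_kg, period_s, inclination_deg=90.0, ecc=0.0):
    """Stellar reflex radial-velocity semi-amplitude K in m/s (two-body Keplerian)."""
    sin_i = math.sin(math.radians(inclination_deg))
    return ((2.0 * math.pi * G / period_s) ** (1.0 / 3.0)
            * m_planet_kg * sin_i
            / (m_star_kg + m_planet_kg) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

# Illustrative case: Earth-mass planet, 25-day orbit around a 0.3 solar-mass M dwarf.
K = rv_semi_amplitude(M_EARTH, 0.3 * M_SUN, 25.0 * DAY)
print(f"K ~ {K:.2f} m/s")  # of order 0.5 m/s, i.e. a demanding instrumental precision
```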
We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, along with data showing system performance from our testing and development at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, now operational at Mt. Hopkins Observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves the discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high-performance adaptive optics system to correct the atmospheric blurring of the parent star's point-spread function, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common-path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal and two pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, based on the technique of "speckle nulling," which removes optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high-contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in fewer than ten iterations.
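A deliberately simplified sketch of one probe-and-correct step of the speckle-nulling idea, with hypothetical hardware callbacks standing in for the camera and deformable mirror interfaces; the instrument's actual code fits the sinusoidal intensity response and estimates an optimal amplitude as well, rather than simply keeping the best probe phase:

```python
import numpy as np

def speckle_nulling_step(get_speckle_intensity, apply_dm_ripple,
                         spatial_freq_cpa, probe_amplitude_nm, n_phases=8):
    """One hypothetical speckle-nulling iteration for a single speckle.

    get_speckle_intensity() -> mean counts in the targeted speckle region (assumed callback)
    apply_dm_ripple(f, a, p) -> put a sine ripple of spatial frequency f (cycles/aperture),
                                amplitude a (nm) and phase p (rad) on the deformable mirror
    """
    phases = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
    intensities = []
    for phase in phases:
        # The DM ripple creates a probe speckle at the same focal-plane location;
        # its interference with the aberration speckle depends on the probe phase.
        apply_dm_ripple(spatial_freq_cpa, probe_amplitude_nm, phase)
        intensities.append(get_speckle_intensity())
    # Keep the probe phase that interferes most destructively with the speckle.
    best = int(np.argmin(intensities))
    apply_dm_ripple(spatial_freq_cpa, probe_amplitude_nm, phases[best])
    return phases[best], intensities[best]
```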
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
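A minimal sketch of the arithmetic behind a four-step phase-shifting estimate of the focal-plane field, assuming reference phase shifts of 0, π/2, π and 3π/2 and a known reference amplitude; this is the textbook combination, not necessarily the exact calibration scheme used on the instrument:

```python
import numpy as np

def estimate_focal_plane_field(i0, i1, i2, i3, reference_amplitude):
    """Four-step phase-shifting estimate of the coherent focal-plane field E.

    i0..i3: intensity images recorded with the reference beam shifted by
            0, pi/2, pi and 3*pi/2; reference_amplitude: calibrated reference
            beam amplitude map (same shape as the images).
    With I_k = |E|^2 + A^2 + 2*A*Re(E*exp(-i*theta_k)), the terms that do not
    beat against the reference (including incoherent companion light) cancel
    in the differences below, which is what allows speckles to be separated
    from genuine companions.
    """
    i0, i1, i2, i3 = (np.asarray(x, dtype=float) for x in (i0, i1, i2, i3))
    return ((i0 - i2) + 1j * (i1 - i3)) / (4.0 * reference_amplitude)
```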
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preference and system technical issues. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security. This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems. This test-bed was used as a platform to test the smart grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available data acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes and abnormal conditions, as well as of the system's stability and security status, and supplies valuable measurements for power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC). Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide-area applications. Experiments and procedures were developed to detect abnormal system conditions and apply proper remedies to heal the system. A DC microgrid was also designed and integrated into the AC system with appropriate control capability. This arrangement provides a realistic representation of hybrid AC/DC microgrids connected to the AC side, enabling study of how such an architecture can be used in system operation to help remedy abnormal conditions. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features to monitor system security and stability measures. These indices are measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage power sharing between the AC generators and the DC-side resources. Finally, a real-time energy management algorithm for hybrid microgrids was studied to evaluate the effects of using energy storage resources and their role in mitigating heavy-load impacts on system stability and operational security.
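As a minimal illustration of the phasor-measurement idea described above (not the dissertation's DAQ-based implementation), the sketch below estimates a voltage phasor from one cycle of samples with a single-bin DFT; all names and numbers are assumptions:

```python
import numpy as np

def estimate_phasor(samples, sample_rate_hz, nominal_freq_hz=60.0):
    """Single-bin DFT phasor estimate over one nominal cycle.

    Returns (rms_magnitude, phase_rad) of the fundamental component of a
    sampled voltage or current waveform; a real PMU adds filtering,
    frequency tracking and time synchronisation on top of this.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    t = np.arange(n) / sample_rate_hz
    # Correlate with the nominal-frequency complex exponential (one-bin DFT).
    phasor = (2.0 / n) * np.sum(samples * np.exp(-1j * 2.0 * np.pi * nominal_freq_hz * t))
    return np.abs(phasor) / np.sqrt(2.0), np.angle(phasor)

# Example: 230 V RMS waveform with a 30-degree phase angle, sampled at 3.84 kHz.
fs, f0 = 3840.0, 60.0
t = np.arange(int(fs / f0)) / fs           # exactly one cycle of samples
v = 230.0 * np.sqrt(2.0) * np.cos(2.0 * np.pi * f0 * t + np.radians(30.0))
print(estimate_phasor(v, fs))              # ~ (230.0, 0.5236)
```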
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the use of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity, so capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective use of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated; these components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as a proper definition of performance measures, results in a calibrated and stable model that closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
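The route choice component mentioned above is often a discrete choice model; the sketch below is a toy binary logit split between the managed lane and the general-purpose lanes, with placeholder coefficients that are not calibrated values from this study:

```python
import math

def managed_lane_share(time_saving_min, toll_usd,
                       beta_time=-0.1, beta_cost=-0.4):
    """Illustrative binary logit share choosing the managed lane over the
    general-purpose lanes. Coefficients are placeholders, not calibrated
    values from the study."""
    # Utility difference of the managed lane relative to the free lanes:
    # it saves travel time but charges a toll.
    delta_utility = -beta_time * time_saving_min + beta_cost * toll_usd
    return 1.0 / (1.0 + math.exp(-delta_utility))

# Example: 8 minutes saved for a $2.50 toll.
print(f"Managed-lane share ~ {managed_lane_share(8.0, 2.50):.2%}")
```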
Abstract:
Question Answering systems that resort to the Semantic Web as a knowledge base can go well beyond the usual matching of words in documents and, preferably, find a precise answer without requiring the user's help to interpret the documents returned. In this paper, the authors introduce a Dialogue Manager that, through the analysis of the question and of the type of expected answer, provides accurate answers to questions posed in natural language. The Dialogue Manager not only represents the semantics of the questions, but also the structure of the discourse, including the user's intentions and the questions' context, adding the ability to deal with multiple answers and to provide justified answers. The performance of the authors' system is evaluated by comparison with similar question answering systems. Although the test suite is of modest size, the results obtained are very promising.
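Not the authors' Dialogue Manager, but a minimal sketch of the Semantic Web lookup step such a system might perform once the question has been analysed and the expected answer type identified; the endpoint, entity and property below are assumptions chosen for illustration:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# A factoid question such as "Where was Fernando Pessoa born?" could, after
# analysis, be mapped to a structured query over a Semantic Web knowledge base.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?place WHERE { dbr:Fernando_Pessoa dbo:birthPlace ?place . }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["place"]["value"])   # expected answer type: a location
```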
Abstract:
Effect of the rearing system on the physical-chemical characteristics and the lipid profile of three muscles from Alentejano pigs. The present study tests the effect of a rearing system with an oleic acid-rich diet on the growth performance, carcass and physical-chemical characteristics, and lipid profile of the Longissimus dorsi (LD), Biceps femoris (BF) and Semimembranosus (SM) muscles of Alentejano pigs. Fifteen animals were used, divided into two groups: group IN (n = 9) was kept in individual open-air pens of 3 m2, and group OUT (n = 6) was kept in a 3 ha area with access to pasture. The OUT group showed the better growth performance. Carcass yield was the same for both groups, while commercial yield and the proportion of prime cuts were higher in the OUT group. OUT animals had lower intramuscular lipid contents but higher levels of polyunsaturated fatty acids (PUFA) and better values for the PUFA/SFA (saturated fatty acids) and n-6/n-3 ratios. The SM muscle was the least affected by the rearing system.
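A trivial sketch of how the lipid-profile ratios reported above are typically derived from a fatty acid composition; the percentages are invented for illustration, not data from this study:

```python
# Hypothetical fatty acid composition of an intramuscular fat sample,
# expressed as % of total fatty acids (values invented for illustration).
fatty_acids = {
    "C16:0": 22.0, "C18:0": 11.0,            # saturated (SFA)
    "C18:1n-9": 48.0,                         # monounsaturated
    "C18:2n-6": 9.0, "C20:4n-6": 1.5,         # n-6 polyunsaturated
    "C18:3n-3": 0.8, "C22:6n-3": 0.2,         # n-3 polyunsaturated
}

sfa = fatty_acids["C16:0"] + fatty_acids["C18:0"]
n6 = fatty_acids["C18:2n-6"] + fatty_acids["C20:4n-6"]
n3 = fatty_acids["C18:3n-3"] + fatty_acids["C22:6n-3"]
pufa = n6 + n3

print(f"PUFA/SFA = {pufa / sfa:.2f}")   # nutritional quality index
print(f"n-6/n-3  = {n6 / n3:.2f}")      # lower values are considered healthier
```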
Abstract:
This thesis aims to present ORC technology, its advantages and the related problems. In particular, it provides an analysis of ORC waste heat recovery systems in different and innovative scenarios, focusing on cases from the largest to the smallest scale. Both industrial and residential ORC applications are considered, and in both the installation of a subcritical, recuperated ORC system is examined. Moreover, heat recovery is considered in the absence of an intermediate heat transfer circuit; this solution improves the recovery efficiency, but requires safety precautions. Possible integrations of ORC systems with renewable sources are also presented and investigated to improve the exploitation of non-programmable sources. In particular, the offshore oil and gas sector has been selected as a promising large-scale industrial ORC application. Starting from the design of ORC systems coupled with gas turbines (GTs) as topping systems, the dynamic behavior of the innovative GT+ORC combined cycles has been analyzed by developing a dynamic model of all the components considered; the dynamic behavior is caused by the integration with a wind farm. The electric and thermal aspects have been examined to identify the advantages related to the installation of the waste heat recovery system. Moreover, an experimental test rig has been built to test the performance of a micro-scale ORC prototype. The prototype recovers heat from a low-temperature water stream, available for instance as industrial or residential waste heat. Various sensors have been installed in the test bench, and an acquisition system has been developed in the LabVIEW environment to fully analyze the ORC behavior. Data collected in real time, corresponding to the system's dynamic behavior, have been used to evaluate the system performance on the basis of selected indexes. Moreover, various steady-state operating conditions have been identified and operating maps produced for a complete characterization of the system and to detect the optimal operating conditions.
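A minimal sketch of the kind of first-law performance indexes such a test bench might compute, here for a simple non-recuperated cycle; the enthalpy and flow values are invented placeholders, not measurements from the prototype:

```python
# Illustrative first-law performance indexes for a subcritical ORC;
# enthalpy values (kJ/kg) are placeholders, not measurements from the test rig.
h_pump_in, h_pump_out = 230.0, 232.5       # condenser outlet -> evaporator inlet
h_turbine_in, h_turbine_out = 470.0, 440.0 # evaporator outlet -> condenser inlet
m_dot = 0.15                               # working fluid mass flow rate, kg/s

w_turbine = m_dot * (h_turbine_in - h_turbine_out)   # kW, gross expander power
w_pump = m_dot * (h_pump_out - h_pump_in)            # kW, pump power
q_in = m_dot * (h_turbine_in - h_pump_out)           # kW, heat recovered from the source

net_power = w_turbine - w_pump
thermal_efficiency = net_power / q_in
print(f"Net power: {net_power:.2f} kW, thermal efficiency: {thermal_efficiency:.1%}")
```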
Abstract:
With the advent of 5G, several novel network paradigms and technologies have been proposed to fulfil the key requirements imposed. Flexibility, efficiency and scalability, along with sustainability and cost-effectiveness, have to be addressed in targeting these brand-new needs. Among the novel paradigms introduced in the scientific literature in recent years, there is constant and increasing interest in the use of unmanned aerial vehicles (UAVs) as network nodes supporting the legacy terrestrial network for service provision. As inherently mobile nodes, they can be deployed on demand in real time, which in practical terms means having them act as a base station (BS) when and where the need is greatest. This thesis therefore investigates the potential role of UAV-aided mobile radio networks, in order to validate the concept of adding an aerial network component and to assess the system performance from the early to the later stages of its deployment. This study is intended for 5G and beyond systems, to allow time for the technology to mature. Since the advantages can be manifold, the aerial network component is considered at the network layer under several aspects, from connectivity to radio resource management. Particular emphasis is given to trajectory design, because of the efficiency and flexibility it potentially adds to the infrastructure. Two different frameworks have been proposed, covering both a re-adaptable heuristic and an optimal solution. Moreover, diverse use cases are analysed, from mobile broadband to machine-type and vehicular communications. The aim of the thesis is thus to discuss the potential and advantages of UAV-aided systems from a broad perspective. Results demonstrate that the technology has good prospects for diverse scenarios with only a few adaptations.
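To give a concrete flavour of how a UAV base station's placement affects coverage, the sketch below evaluates a widely used probabilistic line-of-sight air-to-ground path-loss model (not necessarily the channel model adopted in the thesis); the urban parameter set and the example geometry are assumptions:

```python
import math

def atg_mean_path_loss_db(slant_range_m, altitude_m, freq_hz,
                          a=9.61, b=0.16, eta_los_db=1.0, eta_nlos_db=20.0):
    """Mean air-to-ground path loss for a UAV base station, following the
    common probabilistic LoS/NLoS model (urban parameter set shown here)."""
    c = 3.0e8
    elevation_deg = math.degrees(math.asin(altitude_m / slant_range_m))
    p_los = 1.0 / (1.0 + a * math.exp(-b * (elevation_deg - a)))
    fspl_db = 20.0 * math.log10(4.0 * math.pi * freq_hz * slant_range_m / c)
    # Weight the excess losses of the LoS and NLoS conditions by their probabilities.
    return fspl_db + p_los * eta_los_db + (1.0 - p_los) * eta_nlos_db

# Example: UAV at 100 m altitude serving a user 300 m away (slant range) at 2 GHz.
print(f"{atg_mean_path_loss_db(300.0, 100.0, 2.0e9):.1f} dB")
```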
Abstract:
In the frame of inductive power transfer (IPT) systems, arrays of magnetically coupled resonators have received increasing attention, as they are cheap and versatile thanks to their simple structure. They consist of magnetically coupled coils, which resonate with their self-capacitance or with lumped capacitive networks. Of great industrial interest are planar resonator arrays used to power a receiver that can be placed at any position above the array. A thorough circuit analysis has been carried out, starting from traditional two-coil IPT devices. Resonator arrays have then been introduced, with particular attention to the case of arrays with a receiver. To evaluate the system performance, a circuit model based on original analytical formulas has been developed and experimentally validated. The results of the analysis also led to the definition of a new doubly-fed array configuration with a receiver that can be placed above it at any position. A suitable control strategy aimed at maximising the transmitted power and the efficiency has also been proposed. The study of the array currents has been carried out by resorting to the theory of magneto-inductive waves, providing useful insight. The analysis has been completed with a numerical and experimental study of the magnetic field distribution originating from the array. Furthermore, an application of the resonator array as a position sensor has been investigated: the position of the receiver is estimated through the measurement of the array input impedance, for which an original analytical expression has also been obtained. The application of this sensing technique in an automotive dynamic IPT system has been discussed. The thesis concludes with an evaluation of the possible applications of two-dimensional resonator arrays in IPT systems. These devices can be used to improve system efficiency and transmitted power, as well as for magnetic field shielding.
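A minimal sketch of the kind of lumped-circuit calculation underlying such an analysis: the mesh-impedance matrix of a short resonator array with a receiver is built and solved for the currents. Every parameter value is invented for illustration and is not taken from the thesis:

```python
import numpy as np

# Illustrative lumped-parameter model of a small resonator array with a receiver:
# solve Z @ I = V for the mesh currents.
f = 85e3                       # operating frequency, Hz
w = 2 * np.pi * f
L, C, R = 24e-6, 146e-9, 0.2   # identical cells: self-inductance, tuning cap, loss
M_adj = 3.0e-6                 # mutual inductance between adjacent array cells
M_rx = 5.0e-6                  # mutual inductance between the last cell and the receiver
R_load = 10.0                  # receiver load resistance

n = 4                          # three array cells + one receiver (index 3)
Z = np.zeros((n, n), dtype=complex)
z_cell = R + 1j * w * L + 1.0 / (1j * w * C)
np.fill_diagonal(Z, z_cell)
Z[3, 3] = z_cell + R_load      # receiver branch includes its load
for i in range(2):             # adjacent array couplings 0-1 and 1-2
    Z[i, i + 1] = Z[i + 1, i] = 1j * w * M_adj
Z[2, 3] = Z[3, 2] = 1j * w * M_rx   # receiver coupled to the last array cell

V = np.zeros(n, dtype=complex)
V[0] = 10.0                    # only the first cell is driven (10 V peak)
I = np.linalg.solve(Z, V)

print("cell currents (A):", np.abs(I[:3]).round(2))
print(f"power delivered to the load: {0.5 * R_load * abs(I[3])**2:.2f} W")
```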
Abstract:
In a context of technological innovation, the aim of this thesis is to develop a technology that has gained interest in both the scientific and industrial realms and that serves as a viable alternative to outdated and energy-consuming industrial systems. Electro-adhesive devices (EADs) leverage electrostatic forces to grasp objects or adhere to surfaces. The advantage of employing electrostatics lies in its adaptability to various materials without compromising the structure or chemistry of the object or surface. These benefits have led industry to explore this technology as a replacement for the costly vacuum systems and suction cups currently used for handling most products. Furthermore, the broad applicability of this technology extends to extreme environments, such as space with its ultra-high vacuum conditions. Unfortunately, research in this area has yet to yield practical results for industrially effective gripper prototyping. This is primarily due to the inherent complexity of electro-adhesive technology, which operates on basic capacitive principles that still lack a satisfactory physical description. This thesis aims to address these challenges through a series of studies, starting with the manufacturing process and testing of an EAD that has become the standard in our laboratory. It then delves into material and electrode geometry studies to enhance system performance, ultimately presenting potential industrial applications of the technology. All the presented results are encouraging, as they have yielded shear force values three times higher than those previously reported in the literature. The various applications have demonstrated the significant effectiveness of EADs as brakes or, more broadly, in exerting shear forces, opening up the possibility of pushing this technology to its full potential in cutting-edge applications.
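For orientation only, the sketch below estimates the holding force of an idealised electro-adhesive pad with a simple parallel-plate capacitor model; real interdigitated pads deviate substantially from this, and all numbers are assumptions rather than values from the thesis prototypes:

```python
# Back-of-the-envelope electrostatic attraction for an electro-adhesive pad,
# approximated as a parallel-plate capacitor. Numbers are illustrative only.
EPS_0 = 8.854e-12        # vacuum permittivity, F/m

def electroadhesive_normal_force(voltage_v, area_m2, gap_m, eps_r=3.0):
    """Normal attractive force F = eps0*eps_r*A*V^2 / (2*d^2) of an idealised
    parallel-plate model; real EAD pads deviate from this considerably."""
    return EPS_0 * eps_r * area_m2 * voltage_v ** 2 / (2.0 * gap_m ** 2)

# 1 kV across a 100 um dielectric gap over a 100 cm^2 pad.
f_normal = electroadhesive_normal_force(1000.0, 100e-4, 100e-6)
# Usable shear force is roughly the normal force times a friction coefficient.
print(f"normal force ~ {f_normal:.1f} N, shear ~ {0.5 * f_normal:.1f} N (assuming mu = 0.5)")
```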
Abstract:
Mixing is a fundamental unit operation in the pharmaceutical industry, needed to ensure consistent product quality across different batches. It is usually carried out in mechanically stirred tanks, with a large variety of designs according to the process requirements. A key aspect of pharmaceutical manufacturing is the extensive and meticulous cleaning of the vessels between runs to prevent the risk of contamination. Single-use reactors represent a growing trend in the industry, since they do not require cleaning and sterilization, reducing the need for utilities such as steam to sterilize equipment and the time between production batches. In contrast to traditional stainless steel vessels, single-use reactors consist of a plastic bag used as a vessel and disposed of after use. This thesis aims to characterize the fluid dynamics and the mixing performance of a commercially available single-use reactor. The characterization employs a combination of experimental techniques. The analysis starts with visual observation of the liquid behavior inside the vessel, focusing on the evolution of the vortex shape at different impeller speeds. The power consumption is then measured using a torque meter to quantify the power number. Particle Image Velocimetry (PIV) is employed to investigate local fluid dynamic properties such as the mean flow field and the mean and rms velocity profiles. The same experimental setup as for PIV is used for another optical measurement technique, Planar Laser-Induced Fluorescence (PLIF). The PLIF measurements complete the characterization of the reactor with a qualitative visualization of the turbulent flow and a quantitative assessment of the system performance through the mixing time. The results confirm good mixing performance for the single-use reactor over the range of impeller speeds investigated and reveal that the filling volume plays a significant role in the fluid dynamics of the system.
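A small sketch of how the torque measurement mentioned above is typically turned into the dimensionless power number, together with the impeller Reynolds number; the impeller size, speed and torque below are illustrative assumptions, not data from the single-use reactor studied:

```python
import math

def power_number(torque_nm, impeller_speed_rps, fluid_density, impeller_diameter_m):
    """Dimensionless power number Np = P / (rho * N^3 * D^5), with the power
    drawn obtained from the measured shaft torque, P = 2*pi*N*T."""
    power_w = 2.0 * math.pi * impeller_speed_rps * torque_nm
    return power_w / (fluid_density * impeller_speed_rps ** 3 * impeller_diameter_m ** 5)

def impeller_reynolds(impeller_speed_rps, impeller_diameter_m, fluid_density, viscosity_pa_s):
    """Mixing Reynolds number Re = rho * N * D^2 / mu."""
    return fluid_density * impeller_speed_rps * impeller_diameter_m ** 2 / viscosity_pa_s

# Illustrative values: a 0.1 m impeller at 300 rpm in water, measured torque 0.2 N*m.
N = 300.0 / 60.0
print(f"Re = {impeller_reynolds(N, 0.1, 998.0, 1.0e-3):.0f}")   # turbulent regime
print(f"Np = {power_number(0.2, N, 998.0, 0.1):.2f}")            # order of a Rushton turbine
```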
Abstract:
The HACCP system is increasingly used to ensure food safety. This study investigated the validation of control measures in order to establish performance indicators for the HACCP system in the manufacturing process of Lasagna Bolognese (meat lasagna). Samples were collected along the whole manufacturing process, before and after the CCPs. The following microorganism indicators (MIs) were assessed: total mesophile and faecal coliform counts. The same MIs were analyzed in the final product, together with the microbiological standards required by current legislation. A significant reduction in the total mesophile count was observed after cooking (p < 0.001). After storage, there was a numerical, though non-significant, change in the MI count. Faecal coliform counts were also significantly reduced after cooking (p < 0.001). We were able to demonstrate that the HACCP system allowed the standards set by both the company and the Brazilian regulations to be met, as shown by the reduction in the established indicators.
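A minimal sketch of the kind of before/after comparison reported above, using a paired t-test on log-transformed counts; the values and the choice of test are illustrative assumptions, not the study's data or necessarily its statistical method:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical log10 CFU/g total mesophile counts for the same production
# batches sampled before and after the cooking CCP (values invented).
before_cooking = np.array([5.8, 6.1, 5.5, 6.3, 5.9, 6.0])
after_cooking  = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.3])

# Paired comparison of the indicator before and after the control measure;
# a small p-value supports the cooking step as an effective CCP.
t_stat, p_value = ttest_rel(before_cooking, after_cooking)
print(f"mean log reduction = {np.mean(before_cooking - after_cooking):.2f} log CFU/g")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```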
Abstract:
Background: Computerised databases of primary care clinical records are widely used for epidemiological research. In Catalonia, the Information System for the Development of Research in Primary Care (SIDIAP) aims to promote the development of research based on high-quality validated data from primary care electronic medical records. Objective: The purpose of this study is to create and validate a scoring system (Registry Quality Score, RQS) that will enable primary care practices (PCPs) to be selected as providers of research-usable data based on the completeness of their registers. Methods: Diseases likely to be representative of common diagnoses seen in primary care were selected for the RQS calculations. The observed/expected cases ratio was calculated for each disease. Once we had obtained an estimated value of this ratio for each of the selected conditions, we added up the ratios calculated for each condition to obtain a final RQS. Rate comparisons between observed and published prevalences of diseases not included in the RQS calculations (atrial fibrillation, diabetes, obesity, schizophrenia, stroke, urinary incontinence and Crohn's disease) were used to set the RQS cut-off that enables researchers to select PCPs with research-usable data. Results: Apart from Crohn's disease, all prevalences matched the published values from the fourth RQS quintile (60th percentile) onwards. This RQS cut-off provided a total population of 1 936 443 (39.6% of the total SIDIAP population). Conclusions: SIDIAP is highly representative of the population of Catalonia in terms of geographical, age and sex distributions. We report the usefulness of rate comparison as a valid method to establish research-usable data within primary care electronic medical records.
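A compact sketch of the RQS construction described above: per-practice observed/expected ratios summed over a set of conditions, with a percentile cut-off used to select practices; the practices, conditions and counts are invented for illustration:

```python
import pandas as pd

# Hypothetical observed case counts per primary care practice (PCP) and the
# expected counts derived from published prevalences (all numbers invented).
observed = pd.DataFrame(
    {"hypertension": [900, 400, 1100], "asthma": [300, 120, 360], "COPD": [150, 40, 170]},
    index=["PCP_A", "PCP_B", "PCP_C"],
)
expected = pd.DataFrame(
    {"hypertension": [1000, 1000, 1000], "asthma": [350, 350, 350], "COPD": [160, 160, 160]},
    index=observed.index,
)

# Registry Quality Score: sum over conditions of the observed/expected ratio.
rqs = (observed / expected).sum(axis=1)

# Keep practices above a chosen percentile cut-off as providers of
# research-usable data (the study used the 60th percentile of the RQS).
cutoff = rqs.quantile(0.60)
research_usable = rqs[rqs >= cutoff]
print(rqs.round(2))
print("selected practices:", list(research_usable.index))
```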