30 results for Non-isothermal analysis
at Instituto Politécnico do Porto, Portugal
Abstract:
The excellent mechanical properties of composite materials, combined with their low weight, make them among the most attractive materials in today's technological society. Their growing use, and the excellent results obtained with them, mean that these materials are now employed in complex, safety-critical structures, so machining becomes necessary to allow parts to be joined; drilling is the most frequent operation. The machining of composites is based on the conventional methods used for metallic materials, but the process must be suitably adapted, both in terms of parameters and of the tools used. Composite materials have quite particular characteristics, so when machined they may exhibit defects such as delamination, intralaminar cracks, fibre pull-out, or damage by overheating. Visual inspection is sometimes insufficient to detect this damage, and specific damage-analysis techniques become necessary. Some studies have already aimed at obtaining good-quality holes in composites with minimal damage, but the available information still cannot be compared with what exists for the machining of metals and metal alloys. There is therefore a long way to go before confidence in the use of these materials approaches that in metallic materials. The experimental work developed in this thesis was essentially based on the drilling of laminated plates and the subsequent analysis of the damage caused by this operation. Special attention was given to the measurement of the delamination caused by drilling and to the mechanical strength of the material after machining.
The materials used in this experimental work were carbon/epoxy composite plates with two different fibre orientations: unidirectional and cross-ply. Little information about their characteristics could be obtained from the supplier, so tests were carried out to determine their modulus of elasticity. As for their tensile strength, as already mentioned, the high strength of the material, combined with the limitations of the testing machine, did not allow conclusive values to be reached. Three different tool geometries were used: twist, Brad and Step. The tool materials were high-speed steel (HSS) and tungsten carbide for the twist drills with a 118° point angle, and tungsten carbide only for the Brad and Step drills. Diamond tools were not considered in this work: although their good characteristics for machining composites are well known, their high cost does not justify their choice, at least in an academic work such as this one. The advantages and disadvantages of each geometry and tool material were evaluated, both with respect to delamination and to the mechanical strength of the tested specimens. Delamination values were determined using X-ray radiography. Existing knowledge of this process allowed some parameters to be defined (for example, the exposure time of the plates to the contrast fluid), which made the procedure for obtaining images of the drilled plates straightforward. By importing these images into a drawing package (in this case, AutoCad), it was possible to measure the delaminated areas and obtain values for the delamination factor of each hole. Once this process was complete, all the plates were subjected to bearing tests in order to evaluate how the machining parameters affected the mechanical strength of the material.
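The area measurements described above are typically condensed into a delamination factor. As a hedged illustration (the function names and the 5 mm hole below are assumptions for this sketch, not values from the thesis), the conventional diameter-based factor and an area-based variant can be computed as:

```python
# Illustrative sketch: two common delamination metrics for a drilled hole.
# F_d compares the maximum delaminated diameter with the nominal hole
# diameter; the area-based variant uses the measured delaminated area.
import math

def delamination_factor(d_max_mm: float, d_nominal_mm: float) -> float:
    """Diameter-based delamination factor: D_max / D_0."""
    return d_max_mm / d_nominal_mm

def area_delamination_factor(a_delam_mm2: float, d_nominal_mm: float) -> float:
    """Area-based factor: delaminated area over nominal hole area."""
    a_nominal = math.pi * (d_nominal_mm / 2.0) ** 2
    return a_delam_mm2 / a_nominal

# Example: a 5 mm hole whose damage zone reaches an 8 mm diameter
fd = delamination_factor(8.0, 5.0)   # -> 1.6
```

The area-based variant pairs naturally with area measurements taken in a drawing package, since it only needs the measured delaminated area and the nominal hole diameter.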
In summary, the objectives of this work are:
- to characterize the cutting conditions for composite materials, more specifically carbon-fibre-reinforced plastics with an epoxy matrix (CFRP);
- to characterize the typical damage caused by drilling these materials;
- to develop non-destructive (X-ray) analysis for the evaluation of drilling-induced damage;
- to review existing models based on linear elastic fracture mechanics (LEFM);
- to define a set of optimal machining parameters that minimizes the resulting damage, taking into account the results of the force measurements, of the non-destructive analysis, and of the comparison with existing, known damage models.
Abstract:
In real-time systems, there are two distinct trends for scheduling task sets on unicore systems: non-preemptive and preemptive scheduling. Non-preemptive scheduling is obviously not subject to any preemption delay, but its schedulability may be quite poor, whereas fully preemptive scheduling is subject to preemption delay but benefits from higher flexibility in the scheduling decisions. The time delay introduced by task preemptions is a major source of pessimism in the analysis of the task Worst-Case Execution Time (WCET) in real-time systems. Preemptive scheduling policies with non-preemptive regions are a hybrid of the non-preemptive and fully preemptive scheduling paradigms that combines the benefits of both worlds. In this paper, we exploit the connection between the progression of a task through its operations and the knowledge of the preemption delays as a function of that progression. The pessimism in the preemption-delay estimation is thereby reduced in comparison to state-of-the-art methods, thanks to the additional information available to the analysis.
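As a rough illustration of the idea (the per-point delay model and all numbers are invented for this sketch, not taken from the paper), knowing the preemption delay at each point of a task's progression tightens the bound compared with charging the single worst-case delay for every preemption:

```python
# Sketch: progression-aware preemption-delay accounting.
# delay_at_point[k] is the (hypothetical) cache-related preemption delay,
# in microseconds, if the task is preempted at its k-th preemption point.
delay_at_point = [40, 5, 30, 10, 5]
n_preemptions = 3  # assumed bound on preemptions the task can suffer

# Classical bound: every preemption is charged the global worst-case delay.
pessimistic = n_preemptions * max(delay_at_point)  # -> 120

# Progression-aware bound: charge only the n worst points that can occur.
refined = sum(sorted(delay_at_point, reverse=True)[:n_preemptions])  # -> 80
```

Even this crude refinement shows how extra information about where preemptions can happen reduces the pessimism of the delay estimate.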
Abstract:
A new method for the study and optimization of manipulator trajectories is developed. The novel feature resides in the modeling formulation. Standard system descriptions are based on a set of differential equations which, in general, require laborious computations and may be difficult to analyze. Moreover, the derived algorithms are suited to "deterministic" tasks, such as those appearing in repetitive work, and are not well adapted to the "random" operation that occurs in intelligent systems interacting with a non-structured, changing environment. These facts motivate the development of alternative models based on distinct concepts. The proposed embedding of statistics and the Fourier transform gives a new perspective on the calculation and optimization of robot trajectories in manipulating tasks.
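A minimal sketch of the frequency-domain ingredient: inspecting the spectrum of a sampled joint trajectory with the FFT. The two-tone trajectory below is made up for illustration, not the paper's model.

```python
# Sketch: frequency-domain view of a sampled joint trajectory.
import numpy as np

fs = 100.0                      # sampling rate (Hz), assumed
t = np.arange(200) / fs         # 2 s of samples
# toy joint position: slow 1 Hz motion plus a small 5 Hz component
q = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.sin(2 * np.pi * 5.0 * t)

spectrum = np.abs(np.fft.rfft(q)) / len(q)      # one-sided magnitude
freqs = np.fft.rfftfreq(len(q), 1 / fs)         # corresponding frequencies
dominant = freqs[np.argmax(spectrum)]           # -> 1.0 Hz
```

A statistical summary of many such spectra (over varying tasks) is the kind of object the abstract's statistics-plus-Fourier embedding works with.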
Abstract:
New arguments are presented proving that successive (repeated) measurements have a memory and actually remember each other. The recognition of this peculiarity can essentially change the existing paradigm associated with conventional observation of the behavior of different complex systems and leads towards the application of an intermediate model (IM). This IM can provide a very accurate fit of the measured data in terms of Prony's decomposition. This decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows comparison of measured data in cases where a "best fit" model based on specific physical principles is absent. As an example, we consider two X-ray diffractometers (referred to in the paper as A ("cheap") and B ("expensive")) that are used, after proper calibration, to measure the same substance (corundum, α-Al2O3). The amplitude-frequency response (AFR) obtained in the frame of Prony's decomposition can be used to compare the spectra recorded by the (A) and (B) X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition can be adapted to an "ideal" experiment without memory, while Prony's decomposition corresponds to a real measurement and, in this case, can be fitted in the frame of the IM. New statistical parameters describing the properties of experimental equipment (irrespective of their internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
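For orientation, a hedged sketch of the classical least-squares Prony method referenced above: fit N uniform samples with a small number of exponential modes, so the whole record is summarized by a few parameters. This is the textbook procedure, not necessarily the paper's exact intermediate-model fitting.

```python
# Sketch: least-squares Prony decomposition of a uniformly sampled signal.
import numpy as np

def prony_fit(x: np.ndarray, p: int):
    """Fit x[n] ~ sum_k c_k * z_k**n using p exponential modes."""
    n = len(x)
    # 1) linear prediction: x[i] = -a_1 x[i-1] - ... - a_p x[i-p]
    A = np.column_stack([x[p - 1 - j: n - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    # 2) the modes z_k are the roots of the prediction polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) amplitudes c_k by least squares on the Vandermonde system
    V = np.vander(z, n, increasing=True).T
    c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, c

def prony_eval(z, c, n):
    """Reconstruct n samples from the fitted modes and amplitudes."""
    idx = np.arange(n)
    return sum(ck * zk ** idx for zk, ck in zip(z, c)).real
```

With only 2*p fitted parameters (modes plus amplitudes) describing the full record, two instruments' spectra can be compared through their fitted parameters rather than point by point.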
Abstract:
The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters, as they may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen for significant factors, and it was concluded that the interface voltage and nebulizing gas flow were the only factors that influence the intensity signal for all pharmaceuticals. This fractionated factorial design was projected onto a full 2^2 factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, stepwise multiple linear regression and subsequent optimization were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster, as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.
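The augmented 2^2 factorial with center points can be sketched as follows. The coded design matrix is standard; the response values and the ordinary least-squares fit are purely illustrative (the study itself used stepwise regression on its own data):

```python
# Sketch: 2^2 full factorial design with center points for two coded
# factors (think: interface voltage, nebulizing gas flow), and an OLS fit
# of a first-order model with interaction. Responses are fictitious.
import numpy as np

design = np.array([
    [-1, -1], [+1, -1], [-1, +1], [+1, +1],  # factorial points
    [0, 0], [0, 0], [0, 0],                  # center-point replicates
], dtype=float)

y = np.array([10.0, 18.0, 14.0, 30.0, 18.0, 17.5, 18.5])  # made-up signal

# model: intercept + main effects + interaction
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [intercept, factor-1 effect, factor-2 effect, interaction]
```

Because the coded columns are mutually orthogonal, each coefficient reads directly as an effect estimate; with these toy numbers the predicted maximum falls at both factors coded +1, mirroring the qualitative finding in the abstract.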
Abstract:
The attached document is the pre-print version (initial version sent to the publisher).
Abstract:
The attached document is the post-print version (version corrected by the publisher).
Abstract:
PURPOSE: To analyze and compare the ground reaction forces (GRF) during the stance phase of walking in pregnant women in the 3rd trimester of pregnancy and in non-pregnant women. METHODS: 20 women, 10 pregnant and 10 non-pregnant, voluntarily took part in this study. GRF were measured (1000 Hz) using a force platform (BERTEC 4060-15), an amplifier (BERTEC AM 6300) and a 16-bit analog-to-digital converter (Biopac). RESULTS: The study showed that there were significant differences between the two groups in the absolute duration of the stance phase. Regarding the normalized values, the most significant differences were found in the maximum values of the vertical force (Fz3, Fz1), in the impulse of the antero-posterior force (Fy2), in the growth rates of the vertical force, and in the time taken for the antero-posterior force (Fy) to become null. CONCLUSIONS: It is easier for the pregnant women to continue forward movement (push-off phase). The smaller growth rates of the maximum vertical force (Fz1) in the pregnant women can be associated with a slower gait speed, as an adaptation strategy to maintain balance and to compensate for the alterations in the position of the center of gravity due to the load increase. The data related to the antero-posterior component of the force (Fy) show that there is a significant difference between the pregnant woman's left and right foot, which indicates a different functional behavior of each foot during the propulsion phase (TS).
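For illustration, quantities of the kind reported above can be derived from a force-platform record as sketched below. The synthetic sine-shaped antero-posterior force and the 700 N body weight are assumptions; only the 1000 Hz sampling rate comes from the abstract.

```python
# Sketch: body-weight normalization and impulse of an antero-posterior
# force record from a force platform.
import numpy as np

fs = 1000.0                                  # sampling rate (Hz), per the study
t = np.arange(600) / fs                      # a 0.6 s stance phase
fy = 50.0 * np.sin(2 * np.pi * t / 0.6)      # toy antero-posterior force (N)

body_weight = 700.0                          # N, assumed subject weight
fy_bw = fy / body_weight                     # force normalized by body weight

dt = 1.0 / fs
impulse = float(np.sum((fy[1:] + fy[:-1]) / 2.0) * dt)  # trapezoidal rule, N*s
# braking and propulsion phases nearly cancel for this symmetric toy signal
```

Real records are asymmetric, so the braking and propulsion impulses are compared separately between groups, as in the Fy2 impulse reported above.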
Abstract:
In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled, because conventional non-polar partitioning models are applied. High uncertainties are therefore expected when modeling the mobility of dissociating organic chemicals, as well as their bioavailability for uptake by exposed biota and their degradation. Alternative regressions that account for the ionized fraction of a molecule when estimating fate parameters were applied to the USEtox model. The model parameters most sensitive in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. Negligible differences in CF values and 95% confidence limits between the two approaches were estimated for direct emissions to the freshwater compartment; however, for emissions to the agricultural soil compartment, the default USEtox model overestimates the CFs and the 95% confidence limits of basic compounds by up to three and four orders of magnitude, respectively, relative to the alternative approach. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for the emission scenarios to agricultural soil by one order of magnitude, and larger confidence limits were estimated relative to the alternative approach.
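The Monte Carlo step can be sketched generically as below. The toy fate model, the lognormal distribution for a soil-water partition coefficient (Kd) and the fixed effect factor are placeholders for illustration, not the USEtox formulation:

```python
# Sketch: propagating lognormal uncertainty in a fate parameter into a
# characterization-factor-like quantity CF ~ fate * effect, then reading
# off 95% confidence limits from the sampled distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# assumed lognormal uncertainty on a soil-water partition coefficient (L/kg)
kd = rng.lognormal(mean=np.log(10.0), sigma=1.0, size=n)
ef = 50.0                     # effect factor, held fixed in this sketch

fate = 1.0 / (1.0 + kd)       # toy fate factor: mobile fraction in soil water
cf = fate * ef                # sampled characterization-factor-like values

lo, hi = np.percentile(cf, [2.5, 97.5])   # 95% confidence limits
```

Comparing such confidence limits between two parameterizations (default vs. ionization-aware regressions) is exactly the kind of contrast the abstract reports.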
Abstract:
The major food lipid components are usually analyzed by individual methodologies using diverse extractive procedures for each class. A simple and fast extractive procedure was devised for the sequential analysis of vitamin E, cholesterol, fatty acids, and total fat estimation in seafood, reducing analysis time and organic solvent consumption. Several liquid/liquid-based extractive methodologies using chlorinated and non-chlorinated organic solvents were tested. The extract obtained is used for vitamin E quantification (normal-phase HPLC with fluorescence detection), total cholesterol (normal-phase HPLC with UV detection), fatty acid profile, and total fat estimation (GC-FID), all accomplished in under 40 min. The final methodology presents an adequate linearity range and sensitivity for tocopherol and cholesterol, with intra- and inter-day precisions (RSD) from 3 to 11% for all the components. The developed methodology was applied to diverse seafood samples with positive outcomes, making it a very attractive technique for routine analyses in standard equipped laboratories in the food quality control field.
Abstract:
This study aimed to carry out experimental work to determine, for Newtonian and non-Newtonian fluids, the friction factor (fc) with simultaneous heat transfer, at constant wall temperature as the boundary condition, in fully developed laminar flow inside a vertical helical coil. The Newtonian fluids studied were aqueous solutions of glycerol at 25%, 36%, 43%, 59% and 78% (w/w). The non-Newtonian fluids were aqueous solutions of carboxymethylcellulose (CMC), a polymer, with concentrations of 0.2%, 0.3%, 0.4% and 0.6% (w/w), and aqueous solutions of xanthan gum (XG), another polymer, with concentrations of 0.1% and 0.2% (w/w). According to the rheological study performed, the polymer solutions had shear-thinning behavior and different values of viscoelasticity. The helical coil used has an internal diameter of 0.00483 m, a curvature ratio of 0.0263, a length of 5.0 m and a pitch of 11.34 mm. It was concluded that the friction factors with simultaneous heat transfer for Newtonian fluids can be calculated using expressions from the literature for isothermal flows. The friction factors for the CMC and XG solutions are similar to those for Newtonian fluids when the Dean number, based on a generalized Reynolds number, is less than 80. For Dean numbers higher than 80, the friction factors of the CMC solutions are lower than those of the XG solutions and of the Newtonian fluids. In this range the friction factors decrease with increasing viscometric component of the solution and increase with increasing elastic component. The change of behavior at a Dean number of 80, for both Newtonian and non-Newtonian fluids, is in accordance with the study of Ali [4]. The data also showed that using either the bulk temperature or the film temperature to calculate the physical properties of the fluid has a residual effect on the friction factor values.
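For reference, a sketch of the two dimensionless groups used above: the generalized (Metzner-Reed) Reynolds number for a power-law fluid and the Dean number built on it. The coil inner diameter (0.00483 m) and curvature ratio (0.0263) are from the abstract; the fluid properties below are illustrative, not measured values.

```python
# Sketch: generalized Reynolds number (Metzner-Reed) for a power-law fluid
# in a pipe, and the Dean number De = Re_g * sqrt(d/D) for a coil.
import math

def reynolds_metzner_reed(rho, u, d, K, n):
    """Generalized Reynolds number: rho, density (kg/m3); u, mean velocity
    (m/s); d, pipe diameter (m); K, consistency index (Pa.s^n); n, flow
    behavior index (n < 1 for shear-thinning fluids)."""
    return rho * u ** (2 - n) * d ** n / (
        8 ** (n - 1) * K * ((3 * n + 1) / (4 * n)) ** n)

def dean_number(re_g, curvature_ratio):
    """Dean number based on the generalized Reynolds number."""
    return re_g * math.sqrt(curvature_ratio)

# illustrative shear-thinning solution in the coil from the abstract
re_g = reynolds_metzner_reed(rho=1000.0, u=0.5, d=0.00483, K=0.05, n=0.6)
de = dean_number(re_g, 0.0263)
```

For n = 1 and K equal to the dynamic viscosity, the expression collapses to the ordinary Newtonian Reynolds number, so the same Dean-number threshold can be applied to both fluid classes.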
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional Fieldbus networks? To this question, Fieldbus fundamentalists often argue that the two technologies are not comparable. In fact, Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. Where are the higher layers that permit building real industrial applications? And, taking for granted that they are available, what is the impact of those protocols, mechanisms and application models on the overall performance of Ethernet-based distributed factory-floor applications? In this paper we provide some contributions that may pave the way towards reasonable answers to these issues.
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. The majority of those works, however, are restricted to the analysis of sub-sets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting a framework for the development of tools suitable to extract temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within distributed computer-controlled systems. Therefore, they constitute the foundation upon which real-time applications are to be implemented. A specific class of fieldbus communication networks is based on a simplified version of token-passing protocols, where each station may transfer at most a single message per token visit (SMTV). In this paper, we establish an analogy between non-preemptive task scheduling on single processors and the scheduling of messages on SMTV token-passing networks. Moreover, we clearly show that concepts such as blocking and interference in non-preemptive task scheduling have their counterparts in the scheduling of messages on SMTV token-passing networks. Based on this task/message scheduling analogy, we provide pre-run-time schedulability conditions for supporting real-time messages with SMTV token-passing networks. We provide both utilisation-based and response-time tests to perform the pre-run-time schedulability analysis of real-time messages on SMTV token-passing networks, considering RM/DM (rate monotonic/deadline monotonic) and EDF (earliest deadline first) priority assignment schemes.
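The response-time side of such a pre-run-time test can be sketched as the standard fixed-point iteration for non-preemptive fixed-priority scheduling: stream i waits for at most one lower-priority message (blocking) plus higher-priority interference, then transmits. The stream parameters below are illustrative, not taken from the paper.

```python
# Sketch: non-preemptive fixed-priority response-time analysis, with the
# blocking/interference split described in the abstract. Streams are
# indexed highest priority first; C[i] is the transmission time and T[i]
# the period of stream i (same time unit throughout).
import math

def response_time(i, C, T):
    """Worst-case response time of stream i."""
    # blocking: one lower-priority message may already hold the medium
    blocking = max((C[j] for j in range(i + 1, len(C))), default=0)
    w = blocking
    while True:
        # queuing delay: blocking + interference from higher priorities
        w_new = blocking + sum((math.floor(w / T[j]) + 1) * C[j]
                               for j in range(i))
        if w_new == w:
            return w + C[i]   # once started, transmission is non-preemptive
        w = w_new

C = [1, 2, 3]     # transmission times
T = [5, 10, 20]   # periods
R = [response_time(i, C, T) for i in range(len(C))]   # -> [4, 6, 6]
```

Each stream is schedulable if its response time does not exceed its deadline; the utilisation-based tests mentioned in the abstract give cheaper, sufficient-only checks of the same property.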
Abstract:
WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. Recent research has created a new version of WiDom (we call it slotted WiDom) which offers lower overhead compared to the previous version. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to message streams with release jitter. Furthermore, to provide an accurate timing analysis, we must include the effect of transmission faults on message latencies. Thus, in the proposed analysis we consider the existence of different noise sources and develop the analysis for the case where messages are transmitted over noisy wireless channels. The proposed analysis is evaluated by testing slotted WiDom in two different modes on a real test-bed. The results from the experiments provide a firm validation of our findings.