954 results for Design problems
Abstract:
In this study, the effect of incorporating recycled glass fibre reinforced plastic (GFRP) waste materials, obtained by means of shredding and milling processes, on the mechanical behaviour of polyester polymer mortars (PM) was assessed. For this purpose, different contents of GFRP recyclates, between 4% and 12% by weight, were incorporated into polyester PM materials as sand aggregate and filler replacements. The effect of adding a silane coupling agent to the resin binder was also evaluated. The applied waste material came from the shredding of the leftovers resulting from the cutting and assembly of GFRP pultrusion profiles. Currently, these leftovers, as well as non-conforming products and scrap from the pultrusion manufacturing process, are landfilled, with additional costs to producers and suppliers. Hence, besides the evident environmental benefits, a viable and feasible solution for these wastes would also lead to significant economic advantages. Design of experiments and data treatment were accomplished by means of a full factorial design approach and analysis of variance (ANOVA). Experimental results were promising toward the recyclability of GFRP waste materials as a partial replacement of aggregates and reinforcement for PM materials, with significant improvements in the mechanical properties of the resulting mortars relative to waste-free formulations.
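The full factorial design mentioned in the abstract can be sketched as follows; this is a minimal illustration with hypothetical factor levels and synthetic strength values, not the study's data.

```python
# Minimal sketch of a two-level full factorial design with main-effect
# estimation, in the spirit of a full factorial / ANOVA approach.
# Factor levels and flexural-strength responses are hypothetical.
from itertools import product
from statistics import mean

# Two factors at two levels: GFRP recyclate content (% wt) and silane (0/1).
gfrp_levels, silane_levels = [4, 12], [0, 1]
design = list(product(gfrp_levels, silane_levels))   # 2^2 = 4 runs

# Synthetic flexural strength (MPa) for each (gfrp, silane) run.
response = {(4, 0): 21.0, (4, 1): 23.5, (12, 0): 24.0, (12, 1): 27.5}

def main_effect(index, high):
    """Mean response at a factor's high level minus at its low level."""
    hi = mean(response[run] for run in design if run[index] == high)
    lo = mean(response[run] for run in design if run[index] != high)
    return hi - lo

gfrp_effect = main_effect(0, 12)     # effect of raising waste content
silane_effect = main_effect(1, 1)    # effect of adding the silane agent
```

With replicated runs, the same design matrix would feed an ANOVA to test whether each effect is significant against experimental error.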
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.
Abstract:
PROFIBUS is an international standard (IEC 61158) for factory-floor communications, with some hundreds of thousands of installations worldwide. However, it does not include any wireless capabilities. In this paper we propose a hybrid wired/wireless PROFIBUS solution in which most of the design options are made so as to guarantee the proper real-time behaviour of the overall network. We address the timing unpredictability problems posed by the coexistence of heterogeneous transmission media in the same network. Moreover, we propose a novel solution for providing inter-cell mobility to PROFIBUS wireless nodes.
Abstract:
Adsorption-based separation operations have gained importance in recent years, especially with the development of techniques that simulate moving beds in columns, such as Simulated Moving Bed (SMB) chromatography. This technology was developed in the early 1960s as an alternative to the True Moving Bed (TMB) process, in order to overcome several of the problems associated with the movement of the solid phase that are common in countercurrent chromatographic separation methods. SMB technology has been widely used at industrial scale, mainly in the petrochemical and sugar-processing industries and, more recently, in the pharmaceutical and fine-chemical industries. In recent decades, growing interest in SMB technology, driven by its high yield and efficient solvent consumption, has led to the formulation of different, so-called non-conventional, operating modes that produce more flexible units, capable of increasing separation performance and further widening the technology's range of application. One of the most studied and implemented examples is the Varicol process, in which the ports are moved asynchronously. In this context, the present work focuses on the simulation, analysis and evaluation of SMB technology for two distinct separation cases: the separation of a fructose-glucose mixture and the separation of a racemic mixture of pindolol. For both cases, two operating modes of the SMB unit were considered and compared: the conventional mode and the Varicol mode. Both separation cases were implemented and simulated in the Aspen Chromatography process simulator, using two distinct SMB units (conventional SMB and Varicol SMB).
For the separation of the fructose-glucose mixture, two approaches were used to model the conventional SMB unit: a true moving bed (TMB model) and a real simulated moving bed (SMB model). For the separation of the racemic pindolol mixture, only the SMB model was considered. In the fructose-glucose case, both the conventional and the Varicol SMB units were further optimized in order to increase their productivities. The optimization was carried out by applying a design-of-experiments procedure, in which the experiments were planned, conducted and subsequently analysed through analysis of variance (ANOVA). The statistical analysis made it possible to select the levels of the control factors that yield better results for both SMB units.
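The TMB-model approach to a conventional SMB rests on a standard flow-rate equivalence between the two representations; a minimal sketch is given below, assuming hypothetical column dimensions, porosity and switching time (not the values used in the work).

```python
# Sketch of the standard SMB <-> TMB flow-rate equivalence used when a
# conventional SMB unit is modelled as a true moving bed. Column size,
# bed porosity, switching time and flow rates are hypothetical.
from math import pi

Vc = pi * (0.02 / 2) ** 2 * 0.5    # column volume (m^3): 2 cm diameter, 0.5 m long
eps = 0.4                          # bed porosity
t_switch = 180.0                   # port switching time (s)

# Equivalent solid-phase flow rate in the TMB representation.
Q_solid = (1 - eps) * Vc / t_switch

def q_tmb(q_smb):
    """Equivalent TMB liquid flow rate for a given SMB section flow rate."""
    return q_smb - eps * Vc / t_switch
```

Shortening the switching time increases the equivalent solid flow rate, which is the lever the SMB operator has in place of a physically moving bed.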
Abstract:
A recent and comprehensive review of the use of race and ethnicity in research addressing health disparities in epidemiology and public health is provided. First, the theoretical basis on which race and ethnicity differ is described, drawing on previous work in anthropology, social science and public health. Second, a review of 280 articles published between 2009 and 2011 in high-impact-factor public health and epidemiology journals is presented. An analytical grid enabled the examination of conceptual, theoretical and methodological questions related to the use of both concepts. The majority of the articles reviewed were grounded in a theoretical framework and provided interpretations from various models. However, key problems identified include: a) a failure by researchers to differentiate between the concepts of race and ethnicity; b) an inappropriate use of racial categories to ascribe ethnicity; c) a lack of transparency in the methods used to assess both concepts; and d) a failure to address the limits associated with the construction of racial or ethnic taxonomies and their use. In conclusion, future studies examining health disparities should clearly establish the distinction between race and ethnicity, develop theoretically driven research and address specific questions about the relationships between race, ethnicity and health. We argue that one way to think about ethnicity, race and health is to dichotomize research into two sets of questions about the relationship between human diversity and health.
Abstract:
I - My expectations were high, since this return to the Escola Superior de Música de Lisboa allowed me to work again with the teachers who trained me as a musician and as a teacher, and to update myself with them on various topics related to pedagogy. This aspect is very important because I have come to realize that time sometimes breeds overconfidence, which seems to "blind" us and prevents us from seeing pedagogical errors that are often avoidable. When I started this internship I felt confident and secure about my abilities as a teacher. The turning point in my view of the internship came with the lesson observations/recordings and the corresponding analyses and reflections. I tried to teach those lessons as naturally as possible, since my goal was to observe my everyday work. The first observation of the lessons allowed me to note a few weaker points. However, when that observation was carried out with the didactics teacher, the less positive aspects took on enormous proportions: (1) flaws in instruction, which was too long; (2) feedback of little quality or effectiveness; (3) a low percentage of pupils reaching the objectives; (4) a lesson pace that was sometimes slow, due to long periods of instruction or to poor management of the space. All of these problems were more visible in larger classes. Throughout the internship, after detecting these flaws, I sought to avoid these practices in all the classes I taught. I felt that the lesson pace increased substantially, not only thanks to the teacher's energy and good strategies, but above all because there was "less talking and more working".
Pupils' errors came to be corrected while they worked (corrective feedback close to the positive or negative moment), positive feedback became more prominent, the layout of the room was changed so that pupils were closer to the teacher, and the teacher tried to be less "creative" about altering the lesson plan on the spur of the moment, which left more time for each strategy and allowed more pupils to reach the objectives. Despite this progress towards giving pupils more productive lessons of even better quality, I am aware that some of the errors committed were habits and, as such, may take some time to correct. Nevertheless, I am conscious of them and determined to root them out of my teaching practice.
Abstract:
The main objective of this work was to investigate the application of experimental design techniques to the identification of Michaelis-Menten kinetic parameters. More specifically, this study attempts to elucidate the relative advantages and disadvantages of employing complex experimental design techniques, compared to equidistant sampling, when applied to different reactor operation modes. All studies were supported by simulation data of a generic enzymatic process that obeys the Michaelis-Menten kinetic equation. Different aspects were investigated, such as the influence of the reactor operation mode (batch, fed-batch with pulse-wise feeding and fed-batch with continuous feeding) and of the experimental design optimality criterion on the effectiveness of kinetic parameter identification. The following experimental design optimality criteria were investigated: 1) minimization of the trace of the inverse of the Fisher information matrix (FIM) (A-criterion); 2) maximization of the determinant of the FIM (D-criterion); 3) maximization of the smallest eigenvalue of the FIM (E-criterion); and 4) minimization of the ratio between the largest and the smallest eigenvalue (modified E-criterion). The comparison and assessment of the different methodologies were made on the basis of the Cramér-Rao lower bound (CRLB) errors of the parameters vmax and Km of the Michaelis-Menten kinetic equation. Regarding the reactor operation mode, it was concluded that fed-batch (pulses) operation is better than batch operation for parameter identification. When the former operation mode is adopted, the vmax CRLB error is lowered by 18.6% and the Km CRLB error by 26.4% compared to the batch operation mode.
Regarding the optimality criteria, the best method was the A-criterion, with an average vmax CRLB of 6.34% and 5.27% for batch and fed-batch (pulses), respectively, while presenting a Km CRLB of 25.1% and 18.1%, respectively. As a general conclusion of the present study, experimental design is justified if the starting parameter CRLB errors are below 19.5% (vmax) and 45% (Km) for batch processes, and below 42% and 50%, respectively, for the fed-batch (pulses) process; otherwise, equidistant sampling is the more rational choice. This conclusion clearly supports that, for fed-batch operation, the use of experimental design is likely to largely improve the identification of Michaelis-Menten kinetic parameters.
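The four criteria listed above all operate on the Fisher information matrix; a minimal sketch of building the 2x2 FIM for the Michaelis-Menten rate v = vmax·S/(Km+S) from output sensitivities, and evaluating the criteria, is shown below. The parameter values, noise level and sampling points are hypothetical, not those of the study.

```python
# Sketch: FIM for Michaelis-Menten parameters (vmax, Km) accumulated from
# the rate's sensitivities at a set of substrate concentrations, followed
# by the A-, D-, E- and modified-E optimality quantities. All numbers
# below are illustrative placeholders.
from math import sqrt

vmax, Km, sigma = 1.0, 0.5, 0.05        # assumed parameters and noise std
samples = [0.1, 0.25, 0.5, 1.0, 2.0]    # assumed substrate concentrations

def sens(S):
    """Sensitivities of v = vmax*S/(Km+S) w.r.t. vmax and Km."""
    return (S / (Km + S), -vmax * S / (Km + S) ** 2)

# FIM = sum_i s_i s_i^T / sigma^2 (2x2, symmetric).
f11 = f12 = f22 = 0.0
for S in samples:
    s1, s2 = sens(S)
    f11 += s1 * s1 / sigma ** 2
    f12 += s1 * s2 / sigma ** 2
    f22 += s2 * s2 / sigma ** 2

det = f11 * f22 - f12 ** 2              # D-criterion: maximize det(FIM)
tr_inv = (f11 + f22) / det              # A-criterion: minimize trace(FIM^-1)
half_tr = (f11 + f22) / 2
gap = sqrt(half_tr ** 2 - det)          # eigenvalues of a symmetric 2x2
lam_min, lam_max = half_tr - gap, half_tr + gap
# E-criterion: maximize lam_min; modified E: minimize lam_max / lam_min.
```

The diagonal of FIM⁻¹ also gives the Cramér-Rao lower bounds used in the study to score each design.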
Abstract:
Scientific dissertation submitted for the degree of Master in Civil Engineering, specialization in Hydraulics.
Abstract:
Our day-to-day life depends on several embedded devices, and in the near future many more objects will have computation and communication capabilities, enabling an Internet of Things. Correspondingly, as the interaction of these devices around us increases, developing novel applications is set to become challenging with current software infrastructures. In this paper, we argue that a new paradigm for operating systems needs to be conceptualized to provide a conducive base for application development on cyber-physical systems. We demonstrate its need and importance through a few use-case scenarios, and provide the design principles behind, and an architecture of, a co-operating system (CoS) that can serve as an example of this new paradigm.
Abstract:
This article addresses the problem of obtaining reduced-complexity models of multi-reach water delivery canals that are suitable for robust and linear parameter varying (LPV) control design. In the first stage, by applying a method known from the literature, a finite-dimensional rational transfer function of a priori defined order is obtained for each canal reach by linearizing the Saint-Venant equations. Then, using block-diagram algebra, these reach models are combined with linearized gate models to obtain the overall canal model. Regarding the control design objectives, this approach has the advantages of providing a model of prescribed order and of quantifying the high-frequency uncertainty due to model approximation. A case study with a 3-reach canal is presented, and the resulting model is compared with experimental data. © 2014 IEEE.
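The block-diagram combination step can be sketched as follows: a series connection of two rational transfer functions amounts to convolving their numerator and denominator coefficient lists. The reach and gate coefficients below are placeholders, not the canal model from the article.

```python
# Sketch of block-diagram algebra for rational transfer functions:
# a series (cascade) connection multiplies numerators and denominators,
# implemented here as polynomial coefficient convolution.
def conv(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def series(tf1, tf2):
    """Cascade two transfer functions (num, den) -> (num, den)."""
    (n1, d1), (n2, d2) = tf1, tf2
    return conv(n1, n2), conv(d1, d2)

# Placeholder reach model in series with a linearized gate gain.
reach = ([1.0], [1.0, 0.5])      # 1 / (s + 0.5)
gate = ([2.0], [1.0])            # static gain of 2
num, den = series(reach, gate)   # overall: 2 / (s + 0.5)
```

Feedback and parallel interconnections reduce to the same polynomial operations, which is how the per-reach and gate models compose into the overall canal model.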
Abstract:
The National Cancer Institute (NCI) method allows the distributions of the usual intake of nutrients and foods to be estimated. This method can be used in complex surveys. However, the user must perform additional calculations, such as balanced repeated replication (BRR), in order to obtain standard errors and confidence intervals for the percentiles and the mean of the usual intake distribution. The objective is to highlight adaptations of the NCI method using data from the National Dietary Survey. The application of the NCI method is exemplified by analysing total energy (kcal) and fruit (g) intake, comparing estimates of the mean and standard deviation based on the complex design of the Brazilian survey with those assuming a simple random sample. Although the point estimates of the means were similar, the estimates of the standard error under the complex design increased by up to 60% compared with the simple random sample. Thus, for valid estimates of food and energy intake for the population, all of the sampling characteristics of the survey should be taken into account; when these characteristics are neglected, statistical analysis may produce underestimated standard errors that compromise the results and conclusions of the survey.
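The BRR calculation mentioned above can be sketched in a few lines: the statistic is recomputed on half-sample replicates and its variance estimated from the spread of the replicate estimates. The intake values and replicate weights below are synthetic, and the replicate set is a simplified illustration (no Fay adjustment), not the survey's actual design.

```python
# Illustrative sketch of balanced repeated replication (BRR) variance
# estimation for a weighted mean intake. Four synthetic observations in
# two strata of paired units; each replicate doubles one unit per pair
# and zeroes the other. All data are made up.
from statistics import mean

intake = [1800.0, 2100.0, 1950.0, 2300.0]   # synthetic energy intakes (kcal)
base_w = [1.0, 1.0, 1.0, 1.0]               # full-sample weights

replicates = [
    [2, 0, 2, 0], [2, 0, 0, 2], [0, 2, 2, 0], [0, 2, 0, 2],
]

def wmean(w):
    """Weighted mean intake under a given weight vector."""
    return sum(wi * yi for wi, yi in zip(w, intake)) / sum(w)

theta = wmean(base_w)                        # full-sample estimate
brr_var = mean((wmean(w) - theta) ** 2 for w in replicates)
se = brr_var ** 0.5                          # BRR standard error
```

The same recipe applies to percentiles of the usual intake distribution: recompute the percentile on each replicate and take the spread.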
Abstract:
Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). This paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes, by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in the opposite direction. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
Abstract:
Variations of manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built and the point nearest the origin is taken as the best solution of the bi-objective optimisation problem.
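The nearest-point-to-origin selection on a normed Pareto front can be sketched as follows; the front values are hypothetical, standing in for the performance and robustness objectives of the paper.

```python
# Sketch: pick the best compromise on a bi-objective Pareto front by
# scaling both objectives to [0, 1] over the front and choosing the
# point with the smallest Euclidean distance to the origin.
# The front points below are illustrative, both objectives minimized.
from math import hypot

front = [(1.0, 9.0), (2.0, 5.0), (4.0, 3.0), (8.0, 1.5)]

f1 = [p[0] for p in front]
f2 = [p[1] for p in front]

def normed(p):
    """Scale each objective to [0, 1] over the extremes of the front."""
    return ((p[0] - min(f1)) / (max(f1) - min(f1)),
            (p[1] - min(f2)) / (max(f2) - min(f2)))

best = min(front, key=lambda p: hypot(*normed(p)))
```

Normalizing first matters: without it, the objective with the larger numerical range would dominate the distance and bias the chosen compromise.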
Abstract:
An approach for the analysis of uncertainty propagation in reliability-based design optimization of composite laminate structures is presented. Using the Uniform Design Method (UDM), a set of design points is generated over a domain centered on the mean reference values of the random variables. A methodology based on inverse optimal design of composite structures to achieve a specified reliability level is proposed, and the corresponding maximum load is obtained as a function of ply angle. Using the generated UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on an evolutionary learning process. Then, a Monte Carlo simulation using the developed ANN is performed to simulate the behavior of the critical Tsai number, the structural reliability index, and their relative sensitivities as a function of the ply angle of the laminates. The results are generated for uniformly distributed random variables on a domain centered on the mean values. The statistical analysis of the results enables the study of the variability of the reliability index and of its sensitivity relative to the ply angle. Numerical examples showing the utility of the approach for the robust design of angle-ply laminates are presented.
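The Monte Carlo step above can be sketched in miniature: sample a uniformly distributed input, count failures of a limit state, and convert the failure probability into a reliability index. The limit state below is a deliberately simple placeholder, not the Tsai number criterion or the trained ANN.

```python
# Hedged sketch of a Monte Carlo reliability estimate with a uniform
# random input, as in the study's uniformly distributed variables.
# The capacity/demand limit state is illustrative only.
from random import Random
from statistics import NormalDist

rng = Random(42)                  # fixed seed for reproducibility
n = 100_000

# Failure when a uniform demand on [0.7, 1.1] exceeds a capacity of 1.0.
failures = sum(1 for _ in range(n) if rng.uniform(0.7, 1.1) > 1.0)

pf = failures / n                 # estimated failure probability
beta = -NormalDist().inv_cdf(pf)  # reliability index beta = -Phi^-1(pf)
```

In the study's setting, the expensive structural evaluation inside the loop is replaced by the trained ANN, which is what makes the Monte Carlo sample sizes affordable.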
Abstract:
Internship report submitted for the degree of Master in Civil Engineering, specialization in Buildings.