11 results for Objective element
Abstract:
The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides executing different types of structural analysis, also allows the calculation of derivatives of structural performances using the continuum method of design sensitivity analysis, so that, in combination with the mathematical programming algorithms available in the commercial software MATLAB, structural optimization problems can be solved. The program is called EFFECT – Efficient Finite Element Code. The object-oriented programming paradigm, and specifically the C++ programming language, is used for program development. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capabilities similar to other open-source finite element programs. In this first stage, six elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), triangular shell (Shell3Node) and quadrilateral shell (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other the plate bending behavior. A nonlinear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations, known as geometrically nonlinear problems. The design sensitivity analysis capability is implemented in two elements, Truss2D and Beam2D, including the procedures and analytic expressions for calculating derivatives of displacement, stress and volume performances with respect to five different types of design variables. Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained with EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
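As a rough illustration of the object-oriented design described above, the C++ sketch below shows how an abstract element interface and one concrete element (Truss2D) might be organized. The class layout, method names and data members are assumptions made for illustration only; the abstract does not describe EFFECT's actual interfaces.

// Illustrative sketch only: EFFECT's real class design is not given in the abstract,
// so Element, stiffnessMatrix() and the Truss2D members below are hypothetical.
#include <array>
#include <cmath>
#include <vector>

// Abstract base class: every element type provides its own stiffness matrix.
class Element {
public:
    virtual ~Element() = default;
    virtual std::vector<std::vector<double>> stiffnessMatrix() const = 0;
};

// 2-dimensional truss element: axial stiffness EA/L rotated into global axes.
class Truss2D : public Element {
public:
    Truss2D(double E, double A, std::array<double, 2> n1, std::array<double, 2> n2)
        : E_(E), A_(A), n1_(n1), n2_(n2) {}

    std::vector<std::vector<double>> stiffnessMatrix() const override {
        const double dx = n2_[0] - n1_[0];
        const double dy = n2_[1] - n1_[1];
        const double L = std::hypot(dx, dy);
        const double c = dx / L, s = dy / L;
        const double k = E_ * A_ / L;
        // 4x4 global stiffness matrix of the bar element.
        const double m[4][4] = {
            { c * c,  c * s, -c * c, -c * s},
            { c * s,  s * s, -c * s, -s * s},
            {-c * c, -c * s,  c * c,  c * s},
            {-c * s, -s * s,  c * s,  s * s}};
        std::vector<std::vector<double>> K(4, std::vector<double>(4));
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) K[i][j] = k * m[i][j];
        return K;
    }

private:
    double E_, A_;                   // Young's modulus, cross-section area
    std::array<double, 2> n1_, n2_;  // nodal coordinates
};

int main() {
    Truss2D bar(210e9, 1e-4, {0.0, 0.0}, {2.0, 0.0});  // a 2 m steel bar, for example
    return bar.stiffnessMatrix().size() == 4 ? 0 : 1;
}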
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the Master's degree in Industrial Engineering and Management (MEGI)
Abstract:
The objective of the great investments in telecommunication networks is to bring economies closer together and put an end to asymmetries. The most isolated regions could be the main beneficiaries of this new wave of technological investment spreading through the territories. The new economic scenarios created by globalisation make high-capacity backbones and a coherent information society policy two instruments that could change the fate of regions and launch them into a context of economic development. Technology could bring international projection to services or products and could be the differentiating element between a national and an international economic strategy. Networks and their flows are thus becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and localisation models could be today's challenge. In physical, real space, location is defined by two or three geographical co-ordinates. In the virtual space of networks, or in cyberspace, geography seems unable to define location, because it lacks a good model. To address this problem, new fields of study based on geographical theories and concepts have emerged; Internet Geography, Cybergeography and the Geography of Cyberspace are only three examples. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses in the Portuguese territory. This work shows the great potential and applicability of this indicator for studies of Internet dissemination and regional development. The Portuguese territory is seen in a completely new form: the distribution of IP addresses under the country code top-level domain (.pt) could reveal new regional hierarchies. The spatial concentration or dispersion of top-level domains seems to be a good instrument to reflect the info-structural dynamics and economic development of a territory, especially at the regional level.
Abstract:
Some of the properties sought in the seismic design of buildings are also considered fundamental to guarantee structural robustness. Moreover, some key concepts are common to both seismic and robustness design. In fact, both analyses consider events with a very small probability of occurrence, and consequently a significant level of damage is admissible. As these are very rare events, in both cases the actions are extremely hard to quantify. The acceptance of limited damage requires a system-based analysis of structures, rather than the element-by-element methodology employed for other load cases. As in robustness analysis, the main objective in seismic design is to guarantee that the structure survives an earthquake without extensive damage. In the case of seismic design, this is achieved by guaranteeing the dissipation of energy through plastic hinges distributed in the structure. For this to be possible, some key properties must be assured, in particular ductility and redundancy. The same properties can be fundamental in robustness design, as a structure can only sustain significant damage if it is capable of redistributing stresses to parts of the structure unaffected by the triggering event. Timber is often used for primary load-bearing elements in single-storey, long-span structures for public buildings and arenas, where severe consequences can be expected if one or more of the primary load-bearing elements fail. The structural system used for these structures consists of main frames, secondary elements and bracing elements. The main frames, composed of columns and beams, can be seen as key elements in the system and should be designed with high safety against failure and under strict quality control. The main frames may sometimes be designed with moment-resisting joints between columns and beams. Scenarios where one or more of these key elements fail should be considered, at least for high-consequence buildings. Two alternative strategies may be applied: isolation of collapsing sections and provision of alternate load paths [1]. The first is relatively straightforward to provide by deliberately designing the secondary structural system to be less strong and stiff. Alternatively, the secondary structural system and the bracing system can be designed so that loss of capacity in the main frame does not lead to collapse. A case study was selected to assess the consequences of these two strategies, in particular under seismic loads.
Abstract:
PLoS ONE, 4(11): e7722
Abstract:
Chapter in Merrill, Barbara (ed.) (2009) Learning to Change? The Role of Identity and Learning Careers in Adult Education. Hamburg: Peter Lang Publishers. URL: http://www.peterlang.com/index.cfm?vID=58279&vLang=E&vHR=1&vUR=2&vUUR=1
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology
Abstract:
Clinical analyses are a precious element among the complementary means of diagnosis and therapy, providing a wide range of information on a user's state of health. The aim of the laboratory is to supply analytical information on biological samples that is reliable, relevant and delivered in a timely manner. Since health is at stake, and given the laboratory's purpose, its importance is evident, as is that of the factors involved in fulfilling it. A sound laboratory cycle, comprising the pre-analytical, analytical and post-analytical phases, is crucial for the laboratory's objective to be met with rigor and speed. The present work, "Error in the Pre-Analytical Phase: Non-Compliant Samples versus Procedures", carried out within the Master's in Quality and Organization in the Clinical Analyses Laboratory, aims to emphasize the importance of the pre-analytical phase, identified as the main source of errors that delay the release of results or prevent them from being as reliable as desired, which can lead to false diagnoses and wrong clinical decisions. This phase, which starts with the medical request and ends with the arrival of the biological samples at the laboratory, involves a variety of procedures and, with them, a large number of participants, in addition to many factors that influence the sample and its results. These factors, capable of somehow altering the "truth" of the analytical results, must be identified and taken into consideration so that we can be confident that the results support precise diagnoses and a correct evaluation of the user's condition. Collections which, for whatever reason, do not yield samples that fulfill the purpose for which they were taken, and are therefore not compliant with what was intended, constitute an important source of error in this pre-analytical phase. In this study, data on non-compliant blood and urine samples detected at the laboratory under study during the 1st quarter of 2012 were consulted in order to determine the types of failures that occur and their frequency. The clinical analysis technicians working at the laboratory were asked to answer a questionnaire on their daily procedures, thus forming the population for the second part of the project. Completed and returned anonymously, this questionnaire was intended to document the procedures followed when performing collections and, hypothetically, to confront them with the non-compliant samples observed. In the first semester of 2012, out of a total of 25,319 users, 146 collections had to be repeated because they were found to be non-compliant. "Sample not collected" was the most frequent non-compliance (50%), whereas "incorrect identification" had only one occurrence. There were also non-compliances that went unrecorded, such as "inadequate preparation" and "inappropriately packaged sample". The technicians proved to be competent professionals, knowledgeable about the tasks they perform and concerned with carrying them out with quality.
We will certainly not be able to eliminate error, but recognizing its presence, detecting it and evaluating its frequency will help to decrease its occurrence and improve quality in the pre-analytical phase, giving it the relevance it has within the laboratory process.
Abstract:
The main objective of this work was the development of polymeric structures, gels and films, generated from the dissolution of the Chitin-Glucan Complex (CGC) in biocompatible ionic liquids for biomedical applications. Like chitin, CGC is only soluble in a few special solvents, which are toxic and corrosive. For this reason, and given the urgency of developing biomedical applications, the use of biocompatible ionic liquids to dissolve CGC is indispensable. For the dissolution of CGC, the biocompatible ionic liquid used was choline acetate. Two different CGCs, KiOnutrime from KitoZyme and biologically produced CGC from Faculdade de Ciencias e Tecnologia (FCT) - Universidade Nova de Lisboa, were characterized in order to develop biocompatible wound dressing materials. Similar results were obtained for the chitin:glucan ratio, which is 1:1.72 for CGC-FCT and 1:1.69 for CGC-Commercial. Regarding metal element content, water and inorganic salt content, and protein content, the two polymers showed some discrepancies, with the content in CGC-FCT always higher than in the commercial one. The different characterization results for CGC-FCT and CGC-Commercial can be attributed to differences in the purification method and to the different source organisms, since CGC-FCT is derived from P. pastoris and the commercial CGC from A. niger. This work also investigated the effect of the biopolymer, the dissolution temperature and the non-solvent composition on the characteristics of the polymeric structures generated with the biocompatible ionic liquid. The films were prepared by casting a polymer mixture, immersion in a non-solvent and drying at ambient temperature. Three different non-solvents were tested in the phase inversion method: water, methanol and glycerol. The results indicate that the composition of the non-solvent in the coagulation bath has a great influence on the generated polymeric structure. Water was found to be the best coagulant for producing a CGC polymeric film structure. The characterizations performed include viscosity and viscoelasticity measurements, as well as the sugar composition of the membrane and the total sugar released during the phase inversion method. The rheology tests showed that both polymer mixtures exhibit non-Newtonian shear-thinning behaviour; the viscosity and viscoelasticity tests reveal that the CGC-FCT mixture behaves as a typical viscous solution with entangled polymer chains, whereas the CGC-Commercial mixture shows true gel behaviour. The experimental results show that the CGC solution generated with choline acetate can be used to develop both polymeric film structures and gels. The generated structures are thermally stable at 100 °C and are hydrophilic. The produced films have a dense structure and mechanical stability against puncture up to 60 kPa.
Abstract:
With the continued growth of Internet-connected devices, the scalability of the protocols used for communication between them faces a new set of challenges. In robotics, these communication protocols are an essential element and must be able to accomplish the desired communication. In the context of a multi-agent platform, the main types of Internet communication protocols used in robotics, mission planning and task allocation problems are reviewed. It is defined how to represent a message and how to handle its transport between devices in a distributed environment, reviewing all the layers of the messaging process. A review of the ROS platform is also presented, with the intent of integrating the existing communication protocols with ServRobot, a mobile autonomous robot, and the DVA, a distributed autonomous surveillance system. This is done with the objective of assigning missions to ServRobot in a security context.
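For illustration only, the minimal roscpp sketch below shows the kind of message passing the ROS platform provides: a node advertises a topic and publishes messages whose transport between devices is handled by ROS. The topic name "mission", the std_msgs::String payload and the node name are assumptions; the abstract does not specify the message types actually used by ServRobot or the DVA.

// Minimal roscpp publisher sketch (illustrative; topic, node name and payload are hypothetical).
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "mission_publisher");   // register this node with the ROS master
  ros::NodeHandle nh;

  // Advertise a topic; subscribers on other devices receive the serialized message.
  ros::Publisher pub = nh.advertise<std_msgs::String>("mission", 10);

  ros::Rate rate(1.0);                           // publish at 1 Hz
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "patrol sector A";                // hypothetical mission payload
    pub.publish(msg);                            // transport handled by ROS (TCPROS/UDPROS)
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}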
Abstract:
Economics is a social science and, as such, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and the predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. In the last decades, however, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such a cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999),
the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
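As background for the Cournot application in Chapter 2, the textbook linear symmetric duopoly is sketched below in LaTeX. The abstract does not give the thesis's exact specification (in particular the thinking cost or the default quantity), so the notation here is standard and purely illustrative.

% Textbook linear symmetric Cournot duopoly (illustrative only; the thesis's
% exact model, including the thinking cost and the default choice, is not
% specified in the abstract).
\[
  P(q_1 + q_2) = a - b\,(q_1 + q_2), \qquad
  \pi_i(q_i, q_j) = \bigl(a - b\,(q_i + q_j) - c\bigr)\, q_i .
\]
% A firm that pays the thinking cost and optimizes plays its best response,
% and the symmetric Nash equilibrium quantity follows:
\[
  q_i^{\mathrm{BR}}(q_j) = \frac{a - c - b\,q_j}{2b}, \qquad
  q^{*} = \frac{a - c}{3b}.
\]
% If instead one firm avoids the thinking cost and produces a default quantity
% \bar{q}, while the other best responds with q^{\mathrm{BR}}(\bar{q}), outputs
% are asymmetric even though the game itself is symmetric, which matches the
% type of equilibrium the abstract describes for intermediate thinking costs.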