Abstract:
This investigation focused on the development, testing and validation of methodologies for mercury fractionation and speciation in soil and sediment. After an exhaustive review of the literature, several methods were chosen and tested on well-characterised soil and sediment samples. Sequential extraction procedures that divide mercury into fractions according to their mobility and potential availability in the environment were investigated. The efficiency of different solvents for the fractionation of mercury was evaluated, as well as the adequacy of different analytical instruments for the quantification of mercury in the extracts. Kinetic experiments to establish the equilibrium time for mercury release from soil or sediment were also performed. It was found that, in the studied areas, only a very small percentage of mercury is present as mobile species, that mobility is associated with higher aluminium and manganese contents, and that high contents of organic matter and sulfur result in mercury tightly bound to the matrix. Sandy soils tend to release mercury faster than clayey soils; the texture of the soil or sediment therefore has a strong influence on the mobility of mercury. It also became clear that analytical techniques for the quantification of mercury need to be further developed, with lower quantification limits, particularly for the quantification of mercury in the less concentrated fractions: the water-soluble and exchangeable ones. Although the results provided a better understanding of the distribution of mercury in the sample, the complexity of the procedure limits its applicability and robustness. A proficiency-testing scheme targeting total mercury determination in soil, sediment, fish and human hair was organised in order to evaluate the consistency of results obtained by different laboratories applying their routine methods to the same test samples. Additionally, single extractions with 1 mol L-1 ammonium acetate solution, 0.1 mol L-1 HCl and 0.1 mol L-1 CaCl2, as well as extraction of the organometallic fraction, were proposed for soil; the last was also suggested for sediment and fish. This study was important to update the knowledge on the analytical techniques being used for mercury quantification, the associated problems and sources of error, to improve and standardise mercury extraction techniques, and to implement effective strategies for quality control in mercury determination. A different, “non chemical-like” method for mercury species identification was developed, optimised and validated, based on the thermo-desorption of the different mercury species. Compared to conventional extraction procedures, this method has several advantages: it requires little to no sample treatment; a complete identification of the species present is obtained in less than two hours; mercury losses are almost negligible; it can be considered “clean”, as no residues are produced; and the worldwide comparison of results is easier and more reliable, an important step towards the validation of the method. The main deliverables of this PhD thesis are therefore an improved knowledge of analytical procedures for the identification and quantification of mercury species in soils and sediments, as well as a better understanding of the factors controlling the behaviour of mercury in these matrices.
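The kinetic experiments mentioned above can be illustrated with a simple numerical sketch: fitting a pseudo-first-order release model, q(t) = q_max(1 - e^(-kt)), to extraction data and reading off the time needed to approach equilibrium. The data, the model choice and the variable names below are illustrative assumptions, not the procedure or results of the thesis.

```python
# Illustrative only: fitting a pseudo-first-order mercury-release model
# q(t) = q_max * (1 - exp(-k * t)) to hypothetical extraction data.
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, q_max, k):
    """Cumulative Hg released (µg/kg) after contact time t (hours)."""
    return q_max * (1.0 - np.exp(-k * t))

# Hypothetical cumulative-release data (hours, µg Hg per kg of soil).
t_obs = np.array([0.5, 1, 2, 4, 8, 16, 24, 48])
q_obs = np.array([3.1, 5.4, 8.9, 12.6, 15.0, 16.2, 16.5, 16.7])

(q_max, k), _ = curve_fit(first_order_release, t_obs, q_obs, p0=(15.0, 0.1))

# Time to reach 95 % of the releasable pool, often taken as "equilibrium".
t_95 = -np.log(0.05) / k
print(f"q_max = {q_max:.1f} µg/kg, k = {k:.3f} h^-1, t95 = {t_95:.1f} h")
```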
Abstract:
For e-government to truly exist, it is necessary and crucial to provide public information and documentation and to make its access simple for citizens. A portion, not necessarily small, of these documents is unstructured and in natural language, and consequently lies beyond what current search systems are generally able to cope with and handle effectively. The thesis, therefore, is that it is possible to improve access to these contents using systems that process natural language and create structured information, particularly if supported by semantics. In order to put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and making it available to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management and access based on natural language; (3) assessment of the usability and acceptability of querying information as made possible by the prototype, and in consequence by the conceptual model, by users in a realistic scenario, including comparison with existing forms of access. In addition to this evaluation, at another level related to technology assessment rather than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and useful than the alternatives. Together with the prototype's performance in extracting information from documents, which is comparable to the state of the art, the results demonstrate the feasibility and advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured contents in natural language. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also more suitable for e-government. Transparency in governance, active citizenship and greater agility in the interaction with the public administration, among other goals, require that citizens and businesses have quick and easy access to official information, even if it was originally created in natural language.
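As a rough illustration of the kind of example-driven, ontology-based information extraction described above, the sketch below uses simple patterns to turn an unstructured notice into subject-predicate-object triples. The ontology properties, regular expressions and sample text are hypothetical stand-ins, not the prototype's actual extraction components.

```python
# Minimal sketch of pattern-based extraction into ontology-like triples.
# Property names, patterns and the sample notice are hypothetical.
import re

ONTOLOGY_PATTERNS = {
    "hasDeadline": re.compile(r"deadline[^.]*?(\d{1,2} \w+ \d{4})", re.I),
    "hasBudget": re.compile(r"budget of ([\d,.]+ ?(?:EUR|euros))", re.I),
    "issuedBy": re.compile(r"issued by (the [\w ]+?)[,.]", re.I),
}

def extract_triples(doc_id: str, text: str):
    """Return (subject, predicate, object) triples found in the text."""
    triples = []
    for predicate, pattern in ONTOLOGY_PATTERNS.items():
        match = pattern.search(text)
        if match:
            triples.append((doc_id, predicate, match.group(1).strip()))
    return triples

notice = ("Public tender issued by the Municipality of Aveiro, with a "
          "budget of 120,000 EUR. The deadline for submissions is "
          "15 March 2013.")
for triple in extract_triples("doc:42", notice):
    print(triple)
```

In a real ontology-based pipeline the extracted objects would be typed against ontology classes and stored as RDF, so that natural-language queries can be answered over the structured knowledge base.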
Abstract:
Bioprocesses use microorganisms or cells in order to produce and/or obtain desired products. Nowadays these strategies appear as a fundamental alternative to traditional chemical processes. Among the many advantages associated with their use in the chemical, oil or pharmaceutical industries, their low cost, easy scale-up and low environmental impact should be highlighted. This work reports two examples of bioprocesses as alternatives to traditional chemical processes used by the oil and pharmaceutical industries. The first part of this work studied an example of a bioprocess based on the use of microorganisms in enhanced oil recovery. Currently, due to the high cost of oil and its scarcity, enhanced oil recovery techniques have become very attractive. Among the available techniques, microbial enhanced oil recovery (MEOR) stands out. This process is based on the stimulation of indigenous microorganisms, or on the injection of microbial consortia, to produce specific metabolites and hence increase the amount of oil recovered. The first chapters of this work describe the isolation of several microorganisms from samples of paraffinic Brazilian oils and present their tensioactive and biodegradability properties. Furthermore, the chemical structures of the biosurfactants produced by those isolates were characterised. In the final chapter of the first part, the capability of some isolated bacteria to enhance the recovery of paraffinic Brazilian oils entrapped in sand-pack columns was evaluated. The second part of this work investigated aqueous two-phase systems (ATPS), also known as aqueous biphasic systems, as extractive strategies for antibiotics directly from the fermented broth in which they are produced. To this end, several aqueous two-phase systems composed of ionic liquids (ILs) and polymers were studied for the first time and their phase diagrams were determined. The novel ATPS appear as effective and economical methods to extract different biomolecules and/or biological products. Thus, with the initial antibiotic extraction purpose in mind, the influence of a wide range of ILs and polymers on the ability to form aqueous two-phase systems was studied, as well as their influence on the partitioning of several model molecules, such as amino acids, alkaloids and dyes. The final chapter presents the capacity of these novel systems to extract the antibiotic tetracycline directly from the fermented broth of Streptomyces aureofaciens.
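Partitioning studies such as those described above are typically quantified through a partition coefficient and an extraction efficiency. The short sketch below computes both figures for a hypothetical IL-polymer two-phase system; the concentrations and phase volumes are invented for illustration only.

```python
# Illustrative calculation of partition coefficient and extraction
# efficiency in an IL/polymer aqueous two-phase system (hypothetical data).

def partition_metrics(c_top, c_bottom, v_top, v_bottom):
    """Return (K, EE%) for a solute distributed between two phases.

    K  = C_top / C_bottom                      (partition coefficient)
    EE = amount in top phase / total amount, as a percentage
    """
    k = c_top / c_bottom
    ee = 100.0 * (c_top * v_top) / (c_top * v_top + c_bottom * v_bottom)
    return k, ee

# Hypothetical tetracycline concentrations (g/L) and phase volumes (mL).
k, ee = partition_metrics(c_top=0.92, c_bottom=0.08, v_top=6.0, v_bottom=4.0)
print(f"K = {k:.1f}, extraction efficiency = {ee:.1f} %")
```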
Abstract:
Portugal is one of the European countries with the best spatial and population coverage by its motorway network (5th among the 27 EU member states). The marked growth of this network in recent years makes it necessary to use methodologies for analysing and assessing the quality of the service provided by these infrastructures with respect to traffic conditions. Quality of service is usually assessed by means of internationally accepted methodologies, most notably the one recommended in the Highway Capacity Manual (HCM). It is with this methodology that the levels of service of the various components of a motorway (basic segments, ramps and weaving segments) are usually determined in Portugal. However, its direct transposition to the Portuguese reality raises some reservations, since the elements that make up the road environment (infrastructure, vehicle and driver) are distinct from those of the North American reality for which it was developed. It would therefore be useful for the actors involved in the road sector to have methodologies developed for Portuguese conditions, allowing a more realistic characterisation of the quality of service of motorway operation. It should be noted, however, that developing such methodologies requires a very significant amount of geometric and traffic data, which entails an enormous need for both human and material resources. This approach is therefore difficult to carry out, and alternative methodologies are needed to pursue this objective. Recently, microscopic traffic simulation models, which simulate the individual movement of vehicles in a virtual environment, have come into increasingly widespread use for traffic analyses. This dissertation presents the results obtained in the development of a methodology that seeks to recreate, through microscopic traffic simulators, the behaviour of traffic streams on basic motorway segments, with the aim of subsequently adapting the methodology recommended in the HCM (2000 edition) to the Portuguese reality. To this end, the microscopic simulators used (AIMSUN and VISSIM) were employed to reproduce the traffic conditions on a Portuguese motorway, so that the changes in the behaviour of the traffic streams could be analysed after modifying the main geometric and traffic factors involved in the HCM 2000 methodology. For this purpose, a sensitivity analysis of the simulators was carried out to assess their ability to represent the influence of those factors, with a view to later quantifying their effect for the national reality and thus adapting the methodology to the Portuguese context. In summary, this work presents the main advantages and limitations of the AIMSUN and VISSIM microsimulators in modelling traffic on a Portuguese motorway; it was concluded that these simulators are not able to explicitly represent some of the factors considered in the HCM 2000 methodology, which prevents their use as a tool to quantify the effects of those factors and consequently makes the adaptation of that methodology to the national reality unfeasible.
Some indications are nevertheless given as to how these limitations may be overcome in the future, with a view to eventually achieving that adaptation.
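For context, the HCM 2000 procedure for basic motorway (freeway) segments referred to above grades level of service from traffic density, i.e. the equivalent flow rate divided by the average passenger-car speed. The sketch below reproduces that final classification step using the density thresholds commonly cited for HCM 2000; the input volume, peak-hour factor and heavy-vehicle adjustment are illustrative assumptions, not data from this dissertation.

```python
# Sketch of the HCM 2000 level-of-service check for a basic freeway
# segment: convert a peak-hour volume to an equivalent flow rate, divide
# by the average passenger-car speed to get density, then grade the LOS.
# Input values below (volume, PHF, % trucks, speed) are illustrative.

LOS_THRESHOLDS = [(11, "A"), (18, "B"), (26, "C"), (35, "D"), (45, "E")]

def flow_rate_pcphpl(volume_vph, phf, lanes, p_trucks, e_t=1.5, f_p=1.0):
    """15-min passenger-car equivalent flow rate per lane (pc/h/ln)."""
    f_hv = 1.0 / (1.0 + p_trucks * (e_t - 1.0))   # heavy-vehicle factor
    return volume_vph / (phf * lanes * f_hv * f_p)

def level_of_service(density_pc_per_mi_per_ln):
    for limit, los in LOS_THRESHOLDS:
        if density_pc_per_mi_per_ln <= limit:
            return los
    return "F"

v_p = flow_rate_pcphpl(volume_vph=4200, phf=0.92, lanes=3, p_trucks=0.10)
speed_mph = 65.0                      # average passenger-car speed
density = v_p / speed_mph             # pc/mi/ln
print(f"v_p = {v_p:.0f} pc/h/ln, density = {density:.1f} pc/mi/ln,"
      f" LOS = {level_of_service(density)}")
```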
Abstract:
Education for Sustainable Development (ESD) is at the core of an emerging paradigm in 21st-century education. ESD is a holistic and systemic learning process whose purpose is to teach how to live sustainably. It presents itself as an innovative pedagogical approach that combines active and participatory learning, supported by a multiplicity of didactic-pedagogical strategies. It aims to promote students' critical thinking, problem solving and values-based decision making. For ESD to be implemented, it is fundamental that teachers are aware that dealing with sustainability issues in the classroom requires them to acquire specific competences. It is therefore necessary to invest in the training of educators and trainers, which includes their professional development, focused on improving their competences so as to foster new learning processes consistent with ESD principles. In this context, the present study created a b-learning Training Workshop for basic education teachers, aiming to provide a training space that would allow the integration of ICT/Web 2.0 into teaching practice, more specifically in support of the inclusion of ESD in the curriculum. Starting from the assumption that ICT/Web 2.0 tools offer new opportunities, given their versatility in disseminating knowledge, and that they make it possible to reorient teaching and learning grounded in socio-constructivist theory, promoting collaborative work, an online Community of Practice was created. For this purpose, a virtual social network hosting platform, Grouply, was used, aiming to establish interactions among teachers and the sharing of experiences, resources and knowledge, conducive to the (re)configuration of practices regarding the integration of Web 2.0 tools in the context of ESD, and also to promote the updating, improvement and acquisition of new pedagogical competences, contributing to the teachers' professional and social development. Methodologically, this study was qualitative in nature, following an action-research design, which involved an action plan carried out in a spiral of three action-research cycles: use of different data collection techniques and instruments, particularly questionnaire surveys and interviews with the teachers who attended the training workshop; observation based on the researcher's diary, containing the observation records of the group sessions, the researcher/trainer's reflections and the records of the individual follow-up sessions (pedagogical supervision) carried out throughout the workshop; and documentary analysis of the e-portfolios, containing the individual reflections on each workshop session, the teachers' final reflections and the posts on the discussion forum, blogs and Whiteboard of the online Community of Practice. From the analysis and discussion of the results, the work carried out suggests that the teachers acquired and developed ESD and digital competences, and it was found that the training workshop contributed to some changes in the teachers' practices.
Abstract:
This thesis consists of empirical research on musical expressiveness. The main objectives were: to identify patterns and strategies for expressive improvement applied by professionals of excellence; to identify how performers conceptualise expressiveness; to identify the main components of expressiveness; to verify relationships between expressiveness and the moment of performance; to devise practice strategies based on the information obtained and apply them in practice; to verify their pertinence and carry out a qualitative evaluation; and to develop a practical practice model for expressiveness. Interviews were conducted with professional pianists in order to obtain consistent data about expressiveness, and a case study was subsequently carried out in which practice strategies systematised from the pianists' accounts were applied in work supported by autoethnography, in order to verify their pertinence and carry out a qualitative evaluation of those strategies. In general terms, the results indicated that expressiveness is a communication phenomenon influenced by the performer's ability to convey the musical message and structure, whose main components comprise elements such as character, articulation and phrasing. In addition, performers generally relate expressiveness to specific aesthetic-interpretative "fashions" and trends and to extra-musical elements. The main strategies relevant to expressive improvement relate to phrasing, the creation of contrasts, and sonority. Prioritising these elements, a practice model was constructed.
Abstract:
The objective of this study, which resulted from an internship carried out at the Industrial Association of the District of Aveiro (AIDA), was to understand the role of business and industrial associations for the business fabric of this region. In this sense, the study sought to understand: which services companies seek from their associations, the reasons behind that demand and the degree of satisfaction with the services provided; the advantages and disadvantages of belonging to a business/industrial association; and which changes are perceived as necessary in the associations to improve the quality of the services provided, thus making them more relevant to the business fabric. To achieve the proposed objective, a qualitative methodology was used, with information collected through documentary analysis, participant observation and interviews with AIDA's Director-General and with eight companies belonging to business and industrial associations. The study begins by presenting the evolution of Portuguese industry and of industrial associativism over time, focusing on the most significant changes and describing the functions performed by these associations. The host institution, AIDA, is then analysed, and the activities carried out during the internship at that institution are reported. Next, after an explanation of the methodology used, the analysis of the interviews with entrepreneurs is presented. Finally, conclusions are drawn and perspectives for the future are outlined. This study made it possible to recognise the important role that business and industrial associations play for companies and for the regions in which they operate. However, there are still changes to be made in order to strengthen their work and meet the aspirations and needs of companies.
Abstract:
Viscoelastic treatments are among the most efficient passive damping treatments, particularly in the case of thin and light structures. In this type of treatment, part of the strain energy generated in the viscoelastic material is dissipated to the surroundings in the form of heat. A layer of viscoelastic material is applied to a structure in an unconstrained or constrained configuration, the latter proving to be the most efficient arrangement. This is due to the fact that the relative movement of the host and constraining layers causes the viscoelastic material to be subjected to a relatively high strain energy. Some studies claim, however, that partial application of the viscoelastic material can be just as efficient while reducing material and other treatment application costs. The application of patches of material in specific, selected areas of the structure, thus minimising the extent of damping material, results in an equally efficient treatment. Since the damping mechanism of a viscoelastic material is based on the dissipation of part of the strain energy, the efficiency of the partial treatment can be correlated with the modal strain energy of the structure. Even though the results obtained with this approach in various studies are considered very satisfactory, an optimisation procedure is deemed necessary. In order to obtain optimum solutions, however, time-consuming numerical simulations are required. The optimisation process to use the minimum amount of viscoelastic material is based on an evolutionary geometry re-design and on the calculation of the modal damping, making this procedure computationally costly. To avert this disadvantage, this study uses adaptive layerwise finite elements and applies Genetic Algorithms in the optimisation process.
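To make the optimisation strategy described above concrete, the sketch below runs a tiny genetic algorithm that selects which candidate patch locations receive viscoelastic material, rewarding the modal strain energy covered and penalising the amount of material used. The strain-energy fractions, penalty weight and GA settings are placeholder assumptions, not the finite-element-based procedure of the thesis.

```python
# Toy genetic algorithm for partial viscoelastic treatment layout.
# Chromosome: one bit per candidate patch location (1 = patch applied).
# Fitness: modal strain energy covered minus a material-cost penalty.
# The strain-energy fractions and GA parameters are illustrative.
import random

random.seed(1)

STRAIN_ENERGY = [0.22, 0.05, 0.18, 0.03, 0.25, 0.04, 0.17, 0.06]  # per site
COST_PENALTY = 0.08          # penalty per patch used

def fitness(chrom):
    covered = sum(e for e, bit in zip(STRAIN_ENERGY, chrom) if bit)
    return covered - COST_PENALTY * sum(chrom)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in chrom]

population = [[random.randint(0, 1) for _ in STRAIN_ENERGY] for _ in range(20)]
for _ in range(60):                                   # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                         # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("patch layout:", best, "fitness:", round(fitness(best), 3))
```

In the thesis itself the fitness evaluation would come from a layerwise finite-element model of the damped structure, which is precisely what makes each evaluation, and hence the optimisation, computationally costly.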
Abstract:
The development of a new instrument for the measurement of convective and radiative is proposed, based on the transient operation of a transpiration radiometer. Current transpiration radiometers rely on steady state temperature measurements in a porous element crossed by a know gas mass flow. As a consequence of the porous sensing element’s intrinsically high thermal inertia, the instrument’s time constant is in the order of several seconds. The proposed instrument preserves established advantages of transpiration radiometers while incorporating additional features that broaden its applicability range. The most important developments are a significant reduction of the instrument’s response time and the possibility of separating and measuring the convective and radiative components of the heat flux. These objectives are achieved through the analysis of the instrument’s transient response, a pulsed gas flow being used to induce the transient behavior.
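The transient principle outlined above can be illustrated with a lumped-capacitance energy balance on the porous sensing element: its temperature responds to the incident flux and to the cooling effect of the pulsed transpiration flow, and the flux is inferred from the transient rather than from the steady state. The sketch below is a generic illustration of that idea; all parameter values (mass, heat-transfer terms, pulse timing) are arbitrary placeholders, not the instrument's design data.

```python
# Lumped-capacitance sketch of a transpiration radiometer driven by a
# pulsed gas flow.  All parameter values are illustrative placeholders.

A = 1.0e-4          # sensing area, m^2
m_c = 0.05          # thermal capacitance m*c of porous element, J/K
h = 25.0            # convective coefficient to surroundings, W/(m^2 K)
cp_gas = 1005.0     # gas specific heat, J/(kg K)
T_env = T_gas = 300.0
q_rad = 2000.0      # incident radiative flux to be measured, W/m^2

def mdot(t):
    """Pulsed transpiration mass flow (kg/s): on for 0.5 s, off for 0.5 s."""
    return 2.0e-5 if (t % 1.0) < 0.5 else 0.0

T, dt = T_env, 1.0e-3
for step in range(int(3.0 / dt)):                 # simulate 3 s
    t = step * dt
    dTdt = (q_rad * A + h * A * (T_env - T)
            - mdot(t) * cp_gas * (T - T_gas)) / m_c
    T += dTdt * dt
    if step % 500 == 0:
        print(f"t = {t:4.1f} s  T = {T:7.3f} K")
```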
Abstract:
Online travel shopping has attracted researchers due to its significant growth, and there is a growing body of literature in this field. However, research on what drives consumers to purchase travel online has typically been fragmented. In fact, existing studies have largely concentrated on examining consumers’ online travel purchases grounded either on Davis’s Technology Acceptance Model, on the Theory of Reasoned Action and its extension, the Theory of Planned Behaviour, or on Rogers’s model of perceived innovation attributes, the Innovation Diffusion Theory. A thorough literature review revealed a lack of studies that integrate all of these theories to better understand online travel shopping. Therefore, based on relevant literature in tourism and consumer behaviour, this study proposes and tests an integrated model to explore which factors affect intentions to purchase travel online. Furthermore, it proposes a new construct, termed social media involvement, defined as a person’s level of interest in or emotional attachment to social media, and examines its relationship with intentions to purchase travel online. To test the 18 hypotheses, a quantitative approach was followed, first collecting data through an online survey. With a sample of 1,532 worldwide Internet users, Partial Least Squares analysis was then conducted to assess the validity and reliability of the data and to empirically test the hypothesised relationships between the constructs. The results indicate that intentions to purchase travel online are mostly determined by attitude towards online shopping, which is influenced by the perceived relative advantages of online travel shopping and trust in online travel shopping. In addition, the findings indicate that the second most important predictor of intentions to purchase travel online is compatibility, an attribute from the Innovation Diffusion Theory. Furthermore, even though online shopping is nowadays a common practice, perceived risk continues to negatively affect intentions to purchase travel online. The most surprising finding of this study was that Internet users more involved with social media for travel purposes did not have higher intentions to purchase travel online. The theoretical contributions of this study and the practical implications are discussed, and future research directions are detailed.
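In PLS analyses such as the one described above, the reliability and convergent validity of each construct are commonly screened with composite reliability (CR) and average variance extracted (AVE), computed from the indicator loadings. The snippet below shows that routine check for a hypothetical set of loadings of the proposed social media involvement construct; the loadings are invented, and the 0.70/0.50 cut-offs are the conventional rules of thumb, not results from this study.

```python
# Conventional measurement-model checks used alongside PLS analysis:
# composite reliability (CR) and average variance extracted (AVE).
# The loadings below are invented for illustration.

def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1.0 - l**2 for l in loadings)
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    return sum(l**2 for l in loadings) / len(loadings)

loadings = [0.81, 0.77, 0.88, 0.74]          # hypothetical outer loadings
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.2f} (>= 0.70 expected), AVE = {ave:.2f} (>= 0.50 expected)")
```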
Abstract:
The expectations of citizens regarding Information Technologies (ITs) are increasing, as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical or business applications. Hence, the limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in the Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service - QoS). Therefore, the need for more evolved forms of communication has driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, a Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow over-reservation parameters for CoSs to be defined dynamically based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR by allowing the minimization of control overhead to be optimized. Further, we propose a centralized control mechanism called Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains a good view of its underlying network topology and the related link resource statistics on a real-time basis to control the overall network. It is very important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results prove a drastic reduction of QoS control signalling and the related overhead without QoS violation or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., trees and bandwidth usage information) such that each CDP is able to maintain a good knowledge of the network topology and the related link resource statistics on a real-time basis. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservations of each interface assigned to each tree that uses the interface.
Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs as long as the VOPR of the tree is not exhausted. Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees that originate at the same CDP, so that more session requests can be processed without synchronization than with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information and prevent unnecessary synchronization during congestion periods, when VOPRs would be exhausted upon every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis includes the Survivable ACOR (SACOR) proposal to support stable operation of the QoS and network control mechanisms in case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence time and differentiated traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.), and thus allow overall network control performance to be optimized.
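The core idea of class-based over-reservation, reserving more than each class currently needs so that per-session signalling is avoided until the surplus is consumed, can be sketched as a simple admission routine. The class name, link capacity and over-reservation factor below are hypothetical, and the sketch deliberately omits the dynamic parameter tuning that COR/ECOR perform.

```python
# Minimal sketch of class-based bandwidth over-reservation: admit sessions
# against a per-class reservation, and only trigger QoS signalling when the
# reservation must grow.  Class names and numbers are hypothetical.

class CoSReservation:
    def __init__(self, name, over_factor=1.5):
        self.name = name
        self.over_factor = over_factor    # how much extra to reserve
        self.reserved = 0.0               # bandwidth reserved on the link
        self.used = 0.0                   # bandwidth committed to sessions
        self.signalling_events = 0

    def admit(self, bw, link_free):
        """Admit a session of 'bw' Mb/s; return True if admitted."""
        if self.used + bw <= self.reserved:          # surplus covers it:
            self.used += bw                          # no signalling needed
            return True
        target = (self.used + bw) * self.over_factor
        extra = target - self.reserved
        if extra <= link_free:                       # grow the reservation
            self.reserved = target
            self.used += bw
            self.signalling_events += 1
            return True
        return False                                 # would starve the link

link_capacity = 100.0
voice = CoSReservation("voice")
for _ in range(30):
    voice.admit(bw=2.0, link_free=link_capacity - voice.reserved)

print(f"sessions admitted: {int(voice.used / 2.0)}, "
      f"signalling events: {voice.signalling_events}, "
      f"reserved: {voice.reserved:.0f} Mb/s")
```

Running this toy example admits 30 sessions with only a handful of signalling events, which is the qualitative effect that COR/ECOR exploit; the real algorithms additionally adapt the over-reservation factor to interface conditions to avoid starving other classes.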
Abstract:
The promise of a truly mobile experience is to have the freedom to roam around anywhere and not be bound to a single location. However, the energy required to keep mobile devices connected to the network over extended periods of time quickly dissipates. In fact, energy is a critical resource in the design of wireless networks, since wireless devices are usually powered by batteries. Furthermore, multi-standard mobile devices are allowing users to enjoy higher data rates with ubiquitous connectivity. However, the benefits gained from multiple interfaces come at a cost in terms of energy consumption, with a profound effect on the mobile battery lifetime and standby time. This concern is reaffirmed by the fact that battery lifetime is one of the top reasons why consumers are deterred from using advanced multimedia services on their mobiles on a frequent basis. In order to secure market penetration for next-generation services, energy efficiency needs to be placed at the forefront of system design. However, despite recent efforts, energy-compliant features in legacy technologies are still in their infancy, and new disruptive architectures coupled with interdisciplinary design approaches are required in order not only to promote the energy gain within a single protocol layer, but to enhance the energy gain from a holistic perspective. A promising approach is cooperative smart systems, which, in addition to exploiting context information, are entities able to form a coalition and cooperate in order to achieve a common goal. Migrating from this baseline, this thesis investigates how this technology paradigm can be applied towards reducing the energy consumption in mobile networks. In addition, we introduce an additional energy-saving dimension by adopting an inter-layer design, so that protocol layers are designed to work in synergy with the host system, rather than independently, for harnessing energy. In this work, we exploit context information, cooperation and inter-layer design for developing new energy-efficient and technology-agnostic building blocks for mobile networks. These technology enablers include energy-efficient node discovery and short-range cooperation for energy saving in mobile handsets, complemented by energy-aware smart scheduling for promoting energy saving on the network side. Analytical and simulation results were obtained and verified in the lab on a real hardware testbed. Results have shown that up to 50% energy saving could be obtained.
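As a simplified illustration of the trade-off exploited by short-range cooperation, the sketch below compares the energy a handset would spend downloading content directly over the cellular interface with the energy spent when a nearby peer forwards the content over a low-power short-range link. All energy-per-bit and discovery-cost figures are invented placeholders, not measurements from the testbed mentioned above.

```python
# Toy comparison of direct cellular download vs. short-range cooperation.
# Energy figures are invented placeholders.

E_CELLULAR = 100e-9      # J per bit over the cellular interface
E_SHORTRANGE = 10e-9     # J per bit over the short-range interface
E_DISCOVERY = 1.0        # J spent discovering and pairing with a peer

def direct_energy(bits):
    return bits * E_CELLULAR

def cooperative_energy(bits):
    """Peer already holds the content and forwards it over short range."""
    return E_DISCOVERY + bits * E_SHORTRANGE

for size_mb in (1, 10, 100):
    bits = size_mb * 8e6
    direct, coop = direct_energy(bits), cooperative_energy(bits)
    better = "cooperate" if coop < direct else "go direct"
    print(f"{size_mb:>3} MB: direct {direct:6.2f} J, "
          f"cooperative {coop:6.2f} J -> {better}")
```

The fixed discovery cost only pays off for sufficiently large transfers, which is why energy-efficient node discovery is treated as an enabler in its own right.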
Abstract:
The branch-and-cut algorithm is one of the most efficient exact approaches to solve mixed integer programs. It combines the advantages of a pure branch-and-bound approach and of a cutting-plane scheme. The branch-and-cut algorithm computes the linear programming relaxation of the problem at each node of the search tree, which is improved by the use of cuts, i.e., by the inclusion of valid inequalities. It should be taken into account that the selection of the strongest cuts is crucial for their effective use in a branch-and-cut algorithm. In this thesis, we focus on the derivation and use of cutting planes to solve general mixed integer problems, and in particular inventory problems combined with other problems such as distribution, supplier selection and vehicle routing. In order to achieve this goal, we first consider substructures (relaxations) of such problems, which are obtained by a coherent loss of information. The polyhedral structure of those simpler mixed integer sets is studied to derive strong valid inequalities. Finally, those strong inequalities are included in cutting-plane algorithms to solve the general mixed integer problems. We study three mixed integer sets in this dissertation. The first two arise as subproblems of the lot-sizing with supplier selection, the network design and the vendor-managed inventory routing problems. These sets are variants of the well-known single-node fixed-charge network set, where a binary or integer variable is associated with the node. The third set occurs as a subproblem of mixed integer sets in which incompatibility between binary variables is considered. We generate families of valid inequalities for those sets, identify classes of facet-defining inequalities, and discuss the separation problems associated with the inequalities. Cutting-plane frameworks are then implemented to solve some mixed integer programs. Preliminary computational experiments are presented in this direction.
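To give a concrete flavour of the separation problems mentioned above, the sketch below separates cover inequalities for a single 0-1 knapsack constraint: given a fractional LP solution, it greedily builds a cover C (items whose weights exceed the capacity) and checks whether the inequality "sum of x_i over C <= |C| - 1" is violated. The knapsack data and fractional point are invented, and the heuristic is a generic textbook one; the sets studied in the thesis (fixed-charge network variants) use different, problem-specific inequalities.

```python
# Sketch of a separation heuristic for cover inequalities of a knapsack
# constraint  sum a_i x_i <= b,  x_i in {0,1}.
# A cover C (sum of a_i over C > b) yields the valid inequality
#     sum_{i in C} x_i <= |C| - 1,
# which is violated by a fractional point x* if sum_{i in C} x*_i > |C| - 1.
# The data below are invented for illustration.

def separate_cover(a, b, x_star):
    """Greedy heuristic: prefer items with x* close to 1 per unit weight."""
    order = sorted(range(len(a)), key=lambda i: (1.0 - x_star[i]) / a[i])
    cover, weight = [], 0.0
    for i in order:
        cover.append(i)
        weight += a[i]
        if weight > b:                       # C is now a cover
            lhs = sum(x_star[j] for j in cover)
            rhs = len(cover) - 1
            if lhs > rhs + 1e-6:             # violated -> return the cut
                return cover, lhs, rhs
            return None
    return None

a = [5, 6, 4, 7, 3]                  # knapsack weights
b = 12                               # capacity
x_star = [0.9, 0.8, 0.7, 0.1, 0.2]   # fractional LP solution

cut = separate_cover(a, b, x_star)
if cut:
    cover, lhs, rhs = cut
    print(f"violated cover cut: sum of x_i over {cover} <= {rhs} "
          f"(current value {lhs:.2f})")
```

In a cutting-plane loop, such a violated inequality is added to the LP relaxation, the relaxation is re-solved, and the process repeats until no violated inequality is found or the bound stops improving.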
Abstract:
The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a series of undesirable aspects of real-time networks, such as communication latency, latency jitter and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can go from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. This approach is then expanded to accommodate the techniques of contemporary optimal control. However, unlike the first approach, which deliberately refrains from sending certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control in packet-dropping fieldbuses is also studied, using generalized actuator output functions. This study ends with the development of a new optimal controller, as well as the identification, among the generalized functions that dictate the actuator’s behaviour in the absence of a new control message, of the one that leads to optimal performance. The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm has the goal of allowing the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function. Such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than contemporary ones, for generating task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in that it is capable of scheduling systems that cannot be scheduled by those predecessor algorithms. The Thesis also presents a mechanism that allows multi-path routing to be performed in wireless sensor networks while ensuring that no value is counted in duplicate. This technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications.
As mentioned before, this Thesis is centred on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two approaching the problem from an architectural standpoint, whereas the third one does so on more theoretical grounds. The fourth approach ensures that approaches found in the literature for this and similar problems, which pursue goals similar to the objectives of this Thesis, can do so without causing other problems that may invalidate the solutions in question. Finally, the Thesis presents an approach centred on the efficient generation of the transmission patterns used in the aforementioned techniques.
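The first approach described above, packing estimates of several future control values into each controller-to-actuator message so that the actuator can survive dropped packets, can be illustrated with a small simulation of a scalar plant. The plant model, feedback gain, drop probability and horizon below are arbitrary illustrative choices, not the systems studied in the Thesis.

```python
# Toy simulation of controller-to-actuator messages that carry a horizon of
# future control values, so the actuator can fall back on buffered values
# when a packet is dropped.  Plant, gain and drop rate are illustrative.
import random

random.seed(0)

A, B = 1.05, 1.0          # unstable scalar plant  x[k+1] = A x + B u
K = 0.55                  # stabilising state-feedback gain
HORIZON = 5               # future control values per message
P_DROP = 0.4              # probability a controller->actuator message is lost

x = 5.0
buffer, age = [0.0] * HORIZON, HORIZON   # last received plan and its age
for k in range(30):
    # Controller: predict the state forward and precompute HORIZON inputs.
    plan, x_pred = [], x
    for _ in range(HORIZON):
        u = -K * x_pred
        plan.append(u)
        x_pred = A * x_pred + B * u

    # Network: the whole plan is sent in one message and may be dropped.
    if random.random() > P_DROP:
        buffer, age = plan, 0

    # Actuator: use the value for the current offset into the last plan.
    u_applied = buffer[age] if age < HORIZON else 0.0
    age += 1
    x = A * x + B * u_applied

print(f"state after 30 steps: {x:.4f}")
```

Even with 40% of messages dropped, the buffered plan keeps the loop close to its nominal behaviour, since the actuator only falls back to a default output after more consecutive drops than the message horizon covers.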
Abstract:
This thesis focuses on the processes of narrative change in psychotherapy. Previous reviews of these processes concluded that a general theory is needed that details narrative concepts appropriate to understanding psychotherapy processes, explains the dynamic processes between narratives, and shows how they relate to positive outcomes. This thesis addresses this issue by suggesting a multi-layered model that accounts for transformations at different layers of narrative organization. Accordingly, a model was specified that considers three layers of narrative organization: a micro-layer of narrative innovations that disrupt the clients’ usual way of constructing meaning from life situations (innovative moments); a meso-layer of narrative scripts (protonarratives) that integrate these narrative innovations and consolidate their transformative potential; and, finally, a macro-layer corresponding to the clients’ life story (self-narrative). Globally, the empirical studies provided support for the conceptual plausibility of this model and for the specific hypotheses that were formulated on its basis. Our observations complement previous research that had underlined integrative processes, either by emphasizing thematic coherence or integration, or by emphasizing the role of dynamicity and differentiation of narrative contents and processes. Additionally, they also contribute to expanding previous accounts of narrative innovation through insights into the processes that characterize the development of narrative innovation across psychotherapy. These studies also emphasize the role of quantitative procedures in the study of narrative processes of change, as they allow us to accommodate the complexity and dynamic properties of narrative processes.