959 results for Digital applications (APPs)
Abstract:
This dissertation develops an account of damage and reliability of critical components and structures grounded in the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to materials degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage that relates entropy generation to energy dissipation via generalized thermodynamic forces and fluxes, formally describing the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamically based damage, together with uncertainty about the parameters of the distribution model describing that variability, leads to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. It follows that the reliability function can be derived from the laws of thermodynamics rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to recast prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage approach to reliability and integrity is then demonstrated through experimental validation: the corrosion-fatigue entropy generation function is derived, evaluated and employed for structural integrity and reliability assessment and for remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
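In standard irreversible-thermodynamics notation (used here for illustration; the dissertation defines its own symbols), the flux-force formulation reads:

```latex
% Entropy generation rate sigma as the bilinear form of thermodynamic
% fluxes J_i and their conjugate forces X_i (second law: sigma >= 0):
\sigma \;=\; \sum_i J_i X_i \;\ge\; 0
% Accumulated entropy generation serves as the damage index, normalized
% by an assumed entropic endurance s_f at which failure occurs:
s(t) \;=\; \int_0^t \sigma \, dt', \qquad D(t) \;=\; \frac{s(t)}{s_f}
```

The time to failure is then the first-passage time at which s(t) reaches s_f; randomness in the dissipation processes driving the entropy generation rate is what induces the time-to-failure distribution discussed above.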
Abstract:
Causal inference with a continuous treatment is a relatively under-explored problem. In this dissertation, we adopt the potential outcomes framework. Potential outcomes are responses that would be seen for a unit under all possible treatments. In an observational study where the treatment is continuous, the potential outcomes are an uncountably infinite set indexed by treatment dose. We parameterize this unobservable set as a linear combination of a finite number of basis functions whose coefficients vary across units. This leads to new techniques for estimating the population average dose-response function (ADRF). Some techniques require a model for the treatment assignment given covariates, some require a model for predicting the potential outcomes from covariates, and some require both. We develop these techniques using a framework of estimating functions, compare them to existing methods for continuous treatments, and simulate their performance in a population where the ADRF is linear and the models for the treatment and/or outcomes may be misspecified. We also extend the comparisons to a data set of lottery winners in Massachusetts. Next, we describe the methods and functions in the R package causaldrf using data from the National Medical Expenditure Survey (NMES) and Infant Health and Development Program (IHDP) as examples. Additionally, we analyze the National Growth and Health Study (NGHS) data set and deal with the issue of missing data. Lastly, we discuss future research goals and possible extensions.
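As a sketch of the basis-function parameterization described above, with the basis functions, the number of terms K, and the unit-level coefficients all assumed here for illustration:

```latex
Y_i(t) \;=\; \sum_{k=1}^{K} \beta_{ik}\,\varphi_k(t),
\qquad
\mu(t) \;=\; \mathbb{E}\bigl[Y_i(t)\bigr]
       \;=\; \sum_{k=1}^{K} \mathbb{E}[\beta_{ik}]\,\varphi_k(t)
```

Estimating the ADRF thus reduces to estimating the mean coefficient vector, which is what the estimating-function techniques target, whether they model the treatment assignment, the outcomes, or both.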
Abstract:
The aim of this dissertation was to investigate flexible polymer-nanoparticle composites with unique magnetic and electrical properties. Toward this goal, two distinct projects were carried out. The first project explored the magneto-dielectric properties and morphology of flexible polymer-nanoparticle composites that possess high permeability (µ), high permittivity (ε), and minimal dielectric and magnetic loss (tan δε, tan δµ). The main materials challenges were the synthesis of magnetic nanoparticle fillers displaying high saturation magnetization (Ms) and limited coercivity, and their homogeneous dispersion in a polymeric matrix. Nanostructured magnetic fillers, including polycrystalline iron core-shell nanoparticles and constructively assembled superparamagnetic iron oxide nanoparticles, were synthesized and dispersed uniformly in an elastomer matrix to minimize conductive losses. The resulting composites demonstrated promising permittivity (22.3) and permeability (3), and sustained low dielectric (0.1) and magnetic (0.4) loss for frequencies below 2 GHz. This study demonstrated nanocomposites with tunable magnetic resonance frequency, which can be used to develop compact and flexible radio frequency devices with high efficiency. The second project focused on fundamental research regarding methods for the design of highly conductive polymer-nanoparticle composites that can maintain high electrical conductivity under tensile strain exceeding 100%. We investigated a simple solution-spraying method to fabricate stretchable conductors based on elastomeric block copolymer fibers and silver nanoparticles. Silver nanoparticles were assembled both in and around the block copolymer fibers, forming interconnected dual nanoparticle networks that provide both in-fiber conductive pathways and additional conductive pathways on the outer surface of the fibers. Stretchable composites with conductivity values reaching 9000 S/cm maintained 56% of their initial conductivity after 500 cycles at 100% strain. The manufacturing method developed in this research could pave the way towards direct deposition of flexible electronic devices on substrates of any shape. The electrical and electromechanical properties of these dual silver nanoparticle network composites make them promising materials for the future construction of stretchable circuitry for displays, solar cells, antennas, and strain and tactile sensors.
Abstract:
Resource allocation decisions are made to serve the current emergency without knowing which future emergency will occur. Different ordered combinations of emergencies result in different performance outcomes. Even though future decisions can be anticipated with scenarios, previous models assume that events over a time interval are independent. This dissertation instead assumes that events are interdependent, because speed reduction and rubbernecking due to an initial incident provoke secondary incidents. The misconception that secondary incidents are uncommon has led to the look-ahead concept being overlooked. This dissertation pioneers the relaxation of the structural assumption of independence in the assignment of emergency vehicles. When an emergency is detected and a request arrives, an appropriate emergency vehicle is immediately dispatched. We provide tools for quantifying impacts based on the fundamentals of incident occurrence through identification, prediction, and interpretation of secondary incidents. A proposed online dispatching model minimizes the cost of moving the next emergency unit while keeping the response as close to optimal as possible. Using the look-ahead concept, the online model flexibly re-computes the solution, basing future decisions on present requests. We introduce various online dispatching strategies with visualizations of the algorithms, and provide insights into their differences in behavior and solution quality. The experimental evidence indicates that the algorithm works well in practice. After a designated request has been served, the available and/or remaining vehicles are relocated to a new base for the next emergency. System costs will be excessive if the delay in dispatching decisions is ignored when relocating response units. This dissertation presents an integrated method that begins with a location phase to manage initial incidents and progresses through a dispatching phase to manage the stochastic occurrence of subsequent incidents. Previous studies used the frequency of independent incidents and ignored scenarios in which two incidents occurred within proximal regions and time intervals. The proposed analytical model relaxes the structural assumptions of the Poisson process (independent increments) and incorporates the evolution of primary and secondary incident probabilities over time. The mathematical model overcomes several limiting assumptions of previous models, such as zero waiting time, the rule of returning to the original depot, and fixed depot locations. The look-ahead-based temporal locations are compared with current practice, which locates units at depots based on Poisson theory. A linearization of the formulation is presented, and an efficient heuristic algorithm is implemented to handle a large-scale problem in real time.
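A toy sketch of an online dispatch rule with a one-step look-ahead penalty, in Python; every name and the risk model here are illustrative assumptions, not the dissertation's formulation:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    location: tuple  # (x, y) position of the idle unit

def travel_cost(a, b):
    # Euclidean distance as a stand-in for travel time
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def secondary_risk(request, vehicle, recent_incidents):
    # Toy stand-in: exposure to secondary incidents grows with the number
    # of recent incidents near the request and with this vehicle's
    # response time (slower clearance means longer rubbernecking exposure).
    nearby = sum(1 for inc in recent_incidents if travel_cost(inc, request) < 5.0)
    return nearby * travel_cost(vehicle.location, request)

def dispatch(request, fleet, recent_incidents, weight=2.0):
    """Choose the vehicle minimizing immediate cost plus look-ahead penalty."""
    return min(
        fleet,
        key=lambda v: travel_cost(v.location, request)
        + weight * secondary_risk(request, v, recent_incidents),
    )

# Example: two idle units, one request, one recent incident nearby.
fleet = [Vehicle(1, (0.0, 0.0)), Vehicle(2, (8.0, 1.0))]
print(dispatch((6.0, 0.0), fleet, recent_incidents=[(5.0, 1.0)]).vid)
```

The look-ahead term makes the rule prefer a unit that both reaches the current request quickly and limits the window in which secondary incidents can be provoked.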
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits of using an array of DDSs, with a focus on several common types of DDS error, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all values of DAC output bits. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels for all frequencies tested. This means that there is no benefit to using an array of DDSs for mitigating in-band quantizer nonlinearities; alternate methods of harmonic spur management must be employed.
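The combining arithmetic behind the uncorrelated-noise argument, stated for an N-channel sum (a standard identity, included here for reference): channel signals add coherently while uncorrelated noise adds in power, so

```latex
P_{\mathrm{sig}} \propto N^2, \qquad P_{\mathrm{noise}} \propto N
\quad\Longrightarrow\quad
\mathrm{SNR}_N \;=\; N \cdot \mathrm{SNR}_1
\quad (\text{a } 10\log_{10}N \text{ dB improvement})
```

A fully correlated spur, by contrast, also grows as N squared and sees no relative improvement; this is why the measured correlation of each error type determines the achievable array gain.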
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
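A minimal sketch of the fixed point continuation idea, shown for the matrix case of low-rank completion (the tensor algorithms operate on unfoldings or factorizations instead); the step size, threshold, and mask-based sampling operator are illustrative assumptions rather than the thesis's exact scheme:

```python
import numpy as np

def svt(X, mu):
    """Singular value soft-thresholding (proximal map of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt

def fpc_complete(Y, mask, mu=0.1, tau=1.0, iters=500):
    """Recover a low-rank matrix from the entries of Y observed where mask is True."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)          # gradient of 0.5 * ||P_Omega(X - Y)||^2
        X = svt(X - tau * grad, tau * mu)  # gradient step, then shrinkage
    return X

# Example: 50x50 rank-3 matrix, 40% of entries observed.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))
mask = rng.random((50, 50)) < 0.4
X_hat = fpc_complete(mask * M, mask)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))  # relative recovery error
```

Each iteration alternates a data-fidelity gradient step with a spectral shrinkage step; continuation refers to gradually decreasing the threshold mu, omitted here for brevity.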
Abstract:
This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are addressed in the analyses conducted and presented in this dissertation. The first method addresses an econometric problem: the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems arising from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments were conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles among its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Ultimately, this work contributes to the transportation modeling literature by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take the survey design into account. All methods are rigorously validated on both real and simulated data.
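The multidimensional integral in question is a multivariate normal rectangle probability; SciPy exposes Genz's numerical integration for this, as in the minimal illustration below (the covariance and cutoff values are invented for the example):

```python
# Evaluating a trivariate normal probability of the kind that arises in
# unordered (multinomial probit) likelihoods. SciPy computes the CDF with
# Genz's numerical integration method.
import numpy as np
from scipy.stats import multivariate_normal

cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
upper = np.array([0.2, -0.1, 0.5])  # illustrative utility-difference cutoffs

p = multivariate_normal(mean=np.zeros(3), cov=cov).cdf(upper)
print(p)  # P(X1 <= 0.2, X2 <= -0.1, X3 <= 0.5)
```

Because such integrals are evaluated by randomized numerical integration, the simulated log-likelihood surface is slightly noisy, which is consistent with the smoothness difficulties analyzed above.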
Abstract:
Life Cycle Climate Performance (LCCP) is an evaluation method by which heating, ventilation, air conditioning and refrigeration systems can be evaluated for their global warming impact over the course of their complete life cycle. LCCP is more inclusive than previous metrics such as Total Equivalent Warming Impact. It is calculated as the sum of direct and indirect emissions generated over the lifetime of the system, “from cradle to grave”. Direct emissions include all effects from the release of refrigerants into the atmosphere during the lifetime of the system, including annual leakage and losses during disposal of the unit. Indirect emissions include emissions from the energy consumed during the manufacturing process, lifetime operation, and disposal of the system. This thesis proposes a standardized approach to the use of LCCP, with traceable data sources for all aspects of the calculation. An equation is proposed that unifies the efforts of previous researchers, and data sources are recommended for average values of all LCCP inputs. A residential heat pump sample problem is presented to illustrate the methodology: the heat pump is evaluated at five U.S. locations in different climate zones, and an Excel tool implementing the proposed method was developed for residential heat pumps. The primary factor in the LCCP calculation is the energy consumption of the system. The effects of advanced vapor compression cycles are therefore investigated for heat pump applications. Advanced cycle options attempt to reduce energy consumption in various ways and fall into three categories: subcooling cycles, expansion loss recovery cycles and multi-stage cycles. The cycles selected for study are the suction line heat exchanger cycle, the expander cycle, the ejector cycle, and the vapor injection cycle. The cycles are modeled using Engineering Equation Solver and the results are applied to the LCCP methodology. The expander, ejector and vapor injection cycles are effective in reducing the LCCP of a residential heat pump by 5.6%, 8.2% and 10.5%, respectively, in Phoenix, AZ. Evaluated with low-GWP refrigerants, the advanced cycles are capable of reducing the LCCP of a residential heat pump by 13.7%, 16.3% and 18.6% using a refrigerant with a GWP of 10. To meet the U.S. Department of Energy’s goal of reducing residential energy use by 40% by 2025, assuming a proportional reduction in all other categories of residential energy consumption, the energy consumption of a residential heat pump in Phoenix, AZ must be reduced by 34.8% with a refrigerant GWP of 10. A combination of advanced cycles, control options and low-GWP refrigerants is necessary to meet this goal.
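A simplified form of this kind of unified equation (the symbols and grouping here are illustrative; the thesis specifies the exact terms and traceable data sources):

```latex
\mathrm{LCCP}
\;=\;
\underbrace{C\,\bigl(L\,r_{\mathrm{ann}} + r_{\mathrm{EOL}}\bigr)\,\mathrm{GWP}}_{\text{direct}}
\;+\;
\underbrace{L\,E_{\mathrm{ann}}\,\mathrm{EF} \;+\; m_{\mathrm{mfg}}}_{\text{indirect}}
```

with refrigerant charge C (kg), system lifetime L (years), annual leakage rate r_ann, end-of-life loss fraction r_EOL, annual energy consumption E_ann (kWh), grid emission factor EF (kg CO2-eq/kWh), and embodied manufacturing and disposal emissions m_mfg. The dominance of the lifetime-energy term is why energy consumption is the primary factor in the calculation.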
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as ``stoquastic'' or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, that is, the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I will provide an instance of an optimization problem that is easy to solve classically, but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical and we focus on bounding the spectral gap. Our primary tool for doing this is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I will derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used will also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing ``Fundamental Gap Conjecture''.
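As a concrete instance of the Dirichlet-eigenvalue approach, the one-dimensional (path) subgraph on n interior vertices has the standard closed-form spectrum

```latex
\lambda_k \;=\; 4\sin^2\!\frac{k\pi}{2(n+1)}, \quad k = 1,\dots,n,
\qquad
\gamma_n \;=\; \lambda_2 - \lambda_1
\;=\; 2\Bigl(\cos\tfrac{\pi}{n+1} - \cos\tfrac{2\pi}{n+1}\Bigr)
\;\approx\; \frac{3\pi^2}{(n+1)^2}
```

so the one-dimensional Dirichlet gap closes only polynomially in the subgraph size; this is the kind of quantity for which tight bounds are derived for hypercube and general convex subgraphs as well.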
Abstract:
This document describes the work carried out together with the company MedSUPPORT[1] in the development of a digital platform for analyzing the satisfaction of patients of healthcare units. Nowadays, assessing customer satisfaction is an important procedure that companies should use as one more tool for evaluating their products or services. For healthcare units, assessing patient satisfaction is currently regarded as a fundamental objective of health services and has come to occupy a progressively more important place in the evaluation of their quality. In this context, the development of a digital platform for analyzing the satisfaction of patients of healthcare units was conceived. The initial study on the concept of consumer and patient satisfaction consolidated the concepts associated with the topic under study. Understanding the eight dimensions that, according to researchers, make up patient satisfaction is one of the relevant points of that initial study. To assess satisfaction, the patient must be questioned directly. To this end, a satisfaction survey was developed, with careful study of each of its elements. The development of the satisfaction survey followed these steps: planning of the questionnaire, starting from the eight dimensions of patient satisfaction down to the metrics to be assessed with the patient; analysis of the data to be collected, defining, for each metric, whether the data would be nominal, ordinal or derived from balanced scales; and, finally, careful study of the wording of the survey questions to ensure that patients perceive the purpose of each question as clearly as possible. The definition of the specifications of the platform and of the questionnaire involved different studies, among them a benchmarking analysis[2], which established that the survey will be located in an accessible area of the healthcare unit, answered using a touch screen (tablet), and hosted on the web. Web applications developed today feature an appealing and intuitive design, so it was essential to carry out a study of the web application's design to ensure that the colors used, the typeface, and the placement of information are the most appropriate. The web application was developed in the Ruby programming language, using the Ruby on Rails framework. For the implementation of the application, the different available technologies were studied, with a focus on the database management system to be used. The development of the web application also aimed to improve the management of the information generated by the responses to the satisfaction survey. The MedSUPPORT staff member is responsible for information management, so their needs were addressed: an information management menu is made available to the application administrator, the MedSUPPORT staff member. This menu allows a simplified analysis of the current state through a dashboard-style panel and, to improve internal data analysis, includes a function for exporting the data to a spreadsheet. To validate the study, functional tests of the platform were carried out, covering both its functionality and its use in a real context by the patients surveyed at the healthcare units. The real-context tests aimed to validate the concept with the surveyed patients.
Abstract:
Magnetic nanoparticles (MNPs) are known for the unique properties conferred by their small size and have found wide application in food safety analysis. However, their high surface energy and strong magnetization often lead to aggregation, compromising their function. In this study, iron oxide magnetic particles (MPs) ranging from nano to micro size were synthesized, from which particles with less aggregation and excellent magnetic properties were obtained. MPs were synthesized via three different hydrothermal procedures, using poly(acrylic acid) (PAA) of different molecular weights (Mw) as the stabilizer. The particle size, morphology, and magnetic properties of the MPs from these synthesis procedures were characterized and compared. Among the three syntheses, one-step hydrothermal synthesis demonstrated the highest yield and the most efficient magnetic collection of the resulting PAA-coated magnetic microparticles (PAA-MMPs, >100 nm). The iron oxide content of these PAA-MMPs was around 90%, and the saturation magnetization ranged from 70.3 emu/g to 57.0 emu/g, depending on the Mw of the PAA used. In this approach, the particles prepared using PAA with an Mw of 100K g/mol exhibited superparamagnetic behavior, with ~65% lower coercivity and remanence compared to the others. They were therefore less susceptible to aggregation and remained remarkably water-dispersible even after one month of storage. Three applications of the PAA-MMPs from one-step hydrothermal synthesis were explored: immobilization of food proteins and enzymes, antibody conjugation for pathogen capture, and magnetic hydrogel film fabrication. These studies demonstrated their versatile functions as well as their potential applications in the food science area.
Abstract:
This article summarizes the implementation process of the Digital Television (DTV) Laboratory of the Universidad de Cuenca, created as a reliable environment for experimentation and research that makes use of the features of the ISDB-Tb standard adopted by Ecuador in 2010 for the transmission of over-the-air television signals. The goal of this article is to document the aspects considered in simulating a real scenario in which a Transport Stream (TS), composed of audiovisual content and interactive applications, is first generated, then transmitted over the communications channel, and finally received on a television set with an ISDB-Tb receiver. This facilitates the development of, and experimentation with, new services that take advantage of the new DTV format.
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in their diversity, but are integral to the establishment of classes of risk exposure and to the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study, and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively these imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome, and an expression of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and their mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how risk, as the focus of our analysis, safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, whether through the risks themselves or through associated system elements.
Doing so yields the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and its associated applications to evaluation by the digital preservation community.
Abstract:
The use of technology is considered an effective means of working on academic content with students with Autism Spectrum Disorder (ASD), enabling the creation of creative and constructive environments in which differentiated, meaningful, high-quality activities can be developed. However, the development of technological applications for children and young people with ASD continues to receive little attention, particularly with regard to promoting deductive reasoning, even though this is an area of great interest to individuals with this disorder. For students with ASD, the development of mathematical reasoning is crucial, given the importance of these skills for a successful autonomous life. This evidence reveals the innovative contribution that the learning environment described in this communication can make in this area. The development of this environment began with a stage of creating and validating a model that made it possible to specify and prototype the solution, which offers modes of dynamic adaptation of the proposed activities to the user's profile, seeking to promote the development of mathematical reasoning (inductive and deductive). Given the heterogeneity of ASD, the environment developed is based on dynamic adaptation modes and on activities adjusted to users' profiles. In this communication we seek to present the research work already carried out, as well as to outline the continuation of the work to be developed.