980 results for TDDFT calculations
Abstract:
Dissertation submitted for the Master's degree in Bioorganic Chemistry
Abstract:
ABSTRACT: Chronic low back pain (CLBP) is one of the most common clinical conditions, and one with high socioeconomic costs, in western countries. Recent studies have shown that patients with CLBP present different patterns of activity that influence their levels of functional disability. However, evidence on these associations is still limited and inconclusive. To our knowledge, there is no scale validated for the Portuguese population that measures these activity patterns in CLBP patients.
Purpose: To culturally adapt the Patterns of Activity Measure – Pain (POAM-P) scale to the Portuguese population with non-specific chronic low back pain (NSLBP) and to contribute to its validation. Method: The original English version of the POAM-P was translated, blindly and independently back-translated, and adapted to the Portuguese language (POAM-P-VP) by a multidisciplinary team of translators, experts from different fields, and patients with NSLBP, following current guidelines for this process. The factorial and psychometric analysis of the POAM-P-VP was based on a sample of 132 patients. Internal consistency was analyzed with Cronbach's alpha coefficient (α), and test-retest reliability with the intraclass correlation coefficient (ICC 2,1). The convergent and discriminant construct validity of the POAM-P-VP components was assessed against the Portuguese version of the Tampa Scale of Kinesiophobia (TSK-13-VP), using Spearman's correlation coefficient. All statistical calculations were performed in IBM SPSS Statistics (v.20). Results: Factor analysis identified three components of the POAM-P-VP (avoidance, excessive persistence and pain-contingent persistence), structurally different from the original POAM-P subscales. These components showed good to high internal consistency. Components 1 and 2 showed moderate to excellent test-retest reliability, whereas component 3 showed poor test-retest reliability, limiting its use in clinical practice and research. Regarding construct validity, none of the a priori hypotheses was verified, so no conclusion could be drawn about the relation between activity patterns and kinesiophobia as measured by the TSK-13-VP. However, the avoidance component of the POAM-P-VP appears to measure content shared with the TSK-13-VP (rs = 0.15, p < 0.048). Conclusion: The adaptation and contribution to the validation of the Portuguese version of the POAM-P scale set a starting point towards an instrument for measuring activity patterns in Portuguese CLBP patients, although further validation studies are required. Despite some limitations, this study is of value to physiotherapists and researchers seeking deeper knowledge and more effective intervention approaches for chronic low back pain patients.
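The abstract reports Cronbach's α, ICC and Spearman's rs computed in SPSS. Purely as an illustration, a minimal Python sketch of two of those quantities on synthetic placeholder scores (the 132×10 score matrix and the component split are invented for the example, not taken from the study):

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(132, 10)).astype(float)  # placeholder item scores

alpha = cronbach_alpha(scores)

# Convergent validity, illustrated: Spearman correlation between the
# total scores of two hypothetical components.
rs, p = spearmanr(scores[:, :5].sum(axis=1), scores[:, 5:].sum(axis=1))
print(f"alpha = {alpha:.2f}, rs = {rs:.2f}, p = {p:.3f}")
```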
Abstract:
Breast cancer is the most common type of cancer among women all over the world. An issue not commonly addressed in the breast cancer imaging literature is the importance of imaging the underarm region, to which up to 80% of breast cancers can metastasise. The first axillary lymph nodes to receive drainage from the primary tumour in the breast are called sentinel nodes. If cancer cells are found in a sentinel node, there is an increased risk of metastatic breast cancer, which makes this evaluation crucial for deciding which follow-up exams and therapy to pursue. However, non-invasive detection of cancer cells in the lymph nodes is often inconclusive, leading to the surgical removal of too many nodes, which causes adverse side effects for patients. Microwave Imaging is one of the most promising non-invasive imaging modalities for breast cancer early screening and monitoring. This novel study tests the feasibility of imaging the axilla region by simulating an Ultra-Wideband Microwave Imaging system. Simulations of such a system are performed on several 2D underarm models that mimic the axilla. Initial imaging results are obtained by processing the simulated backscattered signals: artefacts caused by the skin are removed, and the processed signals are beamformed in order to time-align the signals recorded at each antenna. In this dissertation, several image-formation algorithms are implemented and compared by visual inspection of the resulting images and through a range of performance metrics, such as Signal-to-Clutter Ratio and Full-Width at Half-Maximum calculations. The results of this study show that Microwave Imaging is a promising technique that might make it possible to identify the presence and location of metastasised cancer cells in axillary lymph nodes, enabling the non-invasive evaluation of breast cancer staging.
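As a hedged illustration of the image-formation step described above, here is a minimal delay-and-sum beamformer in Python; the propagation speed, sampling rate and geometry are invented placeholders, and the skin-artefact removal is assumed to have been applied beforehand:

```python
import numpy as np

def delay_and_sum(signals, antenna_pos, grid_points, c=2e8, fs=50e9):
    """Minimal delay-and-sum beamformer.

    signals: (n_antennas, n_samples) artefact-removed backscattered traces;
    antenna_pos / grid_points: 2D coordinates in metres;
    c: assumed propagation speed in tissue; fs: sampling rate (both invented).
    """
    image = np.zeros(len(grid_points))
    for i, p in enumerate(grid_points):
        acc = 0.0
        for a, s in zip(antenna_pos, signals):
            tau = 2.0 * np.linalg.norm(p - a) / c   # round-trip delay
            idx = int(round(tau * fs))              # time-alignment sample
            if idx < s.size:
                acc += s[idx]
        image[i] = acc ** 2  # coherent sum, squared for energy
    return image

# Example with synthetic data (3 antennas, 1000 samples, one focal point):
img = delay_and_sum(np.random.rand(3, 1000),
                    antenna_pos=np.array([[0.0, 0.0], [0.05, 0.0], [0.1, 0.0]]),
                    grid_points=[np.array([0.05, 0.04])])
```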
Abstract:
ABSTRACT: Breast cancer is the most frequently diagnosed cancer in women. Scientific knowledge and technology have created many different strategies to treat this pathology. Radiotherapy (RT) is part of the current standard guidelines for most breast cancer treatments. However, radiation is a double-edged sword: although it may cure cancer, it may also induce secondary cancers. The contralateral breast (CLB) is an organ susceptible to absorbing dose during the treatment of the other breast, and is therefore at significant risk of developing a secondary tumour. New radiation techniques, with more complex delivery strategies and promising results, are being implemented in radiotherapy departments. However, some questions have to be properly addressed, such as: Is it safe to move to complex techniques to achieve better conformity in the target volumes in breast radiotherapy? What happens to the target volumes and surrounding healthy tissues? How accurate is dose delivery? What are the shortcomings and limitations of currently used treatment planning systems (TPS)? The answers to these questions rely largely on Monte Carlo (MC) simulations using state-of-the-art computer programs to accurately model the different components of the equipment (target, filters, collimators, etc.) and obtain an adequate description of the radiation fields used, as well as a detailed geometric representation and material composition of the organs and tissues involved. This work investigates the impact of treating left breast cancer with different RT techniques, f-IMRT (forwardly-planned intensity-modulated RT), inversely-planned IMRT (IMRT2, using 2 beams; IMRT5, using 5 beams) and dynamic conformal arc RT (DCART), and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the BrainLAB iPlan TPS were used: Pencil Beam Convolution (PBC) and the commercial Monte Carlo algorithm (iMC). Furthermore, an accurate MC model of the linear accelerator used (a Varian Trilogy) was built with the EGSnrc MC code to accurately determine the doses that reach the CLB. For this purpose it was necessary to model the new High Definition multileaf collimator, which had never before been simulated.
The model developed has since been included in the EGSnrc MC package of the National Research Council Canada (NRC). The linac model was benchmarked against water measurements and later validated against TPS calculations. The dose distributions in the planning target volume (PTV) and the doses to the organs at risk (OAR) were compared by analyzing dose-volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, the OAR, and the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses over greater volumes of normal tissue, right breast, right lung, heart and even the left lung than the tangential techniques (f-IMRT and IMRT2). However, IMRT5 plans improved the dose distribution in the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART presented no advantages over the other techniques investigated. Differences were also found between the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the MC algorithms predicted. The MC algorithms presented similar results (within 2% of each other). The PBC algorithm was considered inaccurate in determining dose in heterogeneous media and in build-up regions; therefore, a major effort is under way at the clinic to acquire the data needed to move from PBC to another calculation algorithm. Despite better PTV homogeneity and conformity, there is an increased risk of CLB cancer development when non-tangential techniques are used. The overall results of the studies performed confirm the outstanding predictive power and accuracy in the assessment and calculation of dose distributions in organs and tissues made possible by the use of MC simulation techniques in RT TPS.
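The plan comparisons above rest on dose-volume histogram (DVH) analysis. Purely as an illustration, a minimal sketch of a cumulative DVH on synthetic voxel doses (the dose values are placeholders, not data from the study):

```python
import numpy as np

def cumulative_dvh(dose_voxels, bins=100):
    """Cumulative DVH: fraction of organ volume receiving >= each dose level."""
    dose_voxels = np.asarray(dose_voxels, dtype=float)
    levels = np.linspace(0.0, dose_voxels.max(), bins)
    volume_fraction = np.array([(dose_voxels >= d).mean() for d in levels])
    return levels, volume_fraction

# Compare two synthetic dose grids for the same organ (stand-ins for
# the PBC and iMC calculations; values are invented).
rng = np.random.default_rng(1)
d_pbc, v_pbc = cumulative_dvh(rng.normal(50, 5, 10_000).clip(min=0))
d_imc, v_imc = cumulative_dvh(rng.normal(49, 5, 10_000).clip(min=0))
```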
Abstract:
25th International Cryogenic Engineering Conference and the International Cryogenic Materials Conference in 2014, ICEC 25–ICMC 2014
Abstract:
The interaction of low-energy electrons with a set of gases relevant in the context of biomolecules and technologically relevant molecules was studied, in an effort to contribute to the understanding of the underlying processes yielding negative ion formation. The results are relevant to the damage inflicted on living material exposed to energetic radiation, to the role of dopants in ion-molecule chemistry, and to the Electron Beam Induced Deposition (EBID) and Ion Beam Induced Deposition (IBID) techniques. The research described in this thesis addresses dissociative electron attachment (DEA) and electron transfer, using experimental setups at the University of Innsbruck, Austria and at Universidade Nova de Lisboa, Portugal, respectively. The thesis presents DEA studies, obtained with a double-focusing mass spectrometer, of dimethyl disulphide (C2H6S2), of the two isomers enflurane and isoflurane (C3H2ClF5O) and of two chlorinated ethanes, pentachloroethane (C2HCl5) and hexachloroethane (C2Cl6), together with quantum chemical calculations providing information on the molecular orbitals as well as on the thermochemical thresholds of anion formation for enflurane, isoflurane, pentachloroethane and hexachloroethane. These experiments represent the most accurate DEA studies of these molecules to date, with significant differences from previous work reported in the literature. As far as electron transfer is concerned, negative ion formation in collisions of neutral potassium atoms with N1- and N3-methylated pyrimidine molecules was studied by time-of-flight (TOF) mass spectrometry. The results made it possible to propose concerted mechanisms for site- and bond-selective excision of bonds.
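The thermochemical thresholds mentioned above follow a standard relation worth recalling here; this is the textbook energy balance for a DEA channel, not a numerical result from the thesis:

```latex
% Thermochemical threshold for a dissociative electron attachment channel
%   e^- + AB -> (AB)^{-*} -> A + B^-
\begin{equation}
  \Delta E_{\mathrm{thr}} \simeq D(\mathrm{A\!-\!B}) - \mathrm{EA}(\mathrm{B})
\end{equation}
% The channel is open for electron energies above \Delta E_{\mathrm{thr}},
% and is exothermic near 0 eV whenever the electron affinity EA(B)
% exceeds the A-B bond dissociation energy D.
```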
Abstract:
The hospital pharmacy in large, advanced institutions has evolved from a simple storage and distribution unit into a highly specialized manipulation and dispensation center, responsible for handling hundreds of clinical requests, many of them unique and not obtainable from commercial companies. It was therefore quite natural that, in many environments, a manufacturing service was gradually established to cater to both conventional and extraordinary demands of the medical staff. That was the case at Hospital das Clinicas, where multiple categories of drugs are routinely produced inside the pharmacy. However, cost-containment imperatives dictate that such activities be reassessed in the light of their efficiency and essentiality. METHODS: In a prospective study, the output of the Manufacturing Service of the Central Pharmacy during a 12-month period was documented and classified into three types. Group I comprised drugs similar to commercially distributed products, Group II included exclusive formulations for routine consumption, and Group III dealt with special demands related to clinical investigations. RESULTS: The three categories represented 34.4%, 45.3%, and 20.3% of total manufacturing orders, respectively. Production costs were assessed and compared with market prices for Group I preparations, indicating savings of 63.5%. When applied to the other groups, for which no direct market-value equivalent existed, these results suggest total yearly savings of over 5,100,000 US dollars. Even considering that these calculations leave out many cost components, notably those concerning marketing and distribution, it might still be concluded that at least part of the savings achieved was real. CONCLUSIONS: The observed savings, allied with the convenience and reliability with which the Central Pharmacy performed its obligations, support the contention that internal manufacture of pharmaceutical formulations was a cost-effective alternative in the described setting.
Abstract:
In order to address and resolve the wastewater contamination problem of the Sines refinery, with the main objectives of optimizing the quality of this stream and reducing the costs charged to the refinery, a dynamic mass balance was developed and implemented for ammonia and polar oil and grease (O&G) contamination in the wastewater circuit. The inadequate routing of the sour gas from the sour water stripping unit and of the effluent of the kerosene caustic washing unit were identified as the major sources of, respectively, the ammonia and the polar substances present in the industrial wastewater effluent. For the O&G content, a predictive model was developed for the kerosene caustic washing unit, following the Projection to Latent Structures (PLS) approach. The analytical data for ammonia and polar O&G concentrations in the refinery wastewater originating from the Dissolved Air Flotation (DAF) effluent and the predictions of the dynamic mass balance calculations are in very good agreement, which highlights the dominant impact of the identified streams on the wastewater contamination levels. The ammonia contamination problem was solved by rerouting the sour gas through an existing line that had become clogged with ammonia salts because of a non-insulated section, while for the O&G the dynamic mass balance was implemented as an online tool that allows possible contamination situations to be predicted and the required preventive actions to be taken; it can also serve as a basis for relating the O&G contamination in the refinery wastewater to the properties of the refined crude oils and the process operating conditions. The PLS model developed could be a great asset both in optimizing existing refinery wastewater treatment units or reuse schemes and in designing new ones. To find a possible treatment solution for the spent caustic problem, on-site pilot plant experiments on NaOH recovery from the kerosene caustic washing unit effluent, using an alkaline-resistant nanofiltration (NF) polymeric membrane, were performed to evaluate the membrane's applicability to these highly alkaline and contaminated streams. At constant operating pressure and temperature and under adequate operating conditions, 99.9% rejection of oil and grease and 97.7% rejection of chemical oxygen demand (COD) were observed. No noticeable membrane fouling or flux decrease was registered up to a volume concentration factor of 3. These results allow NF permeate to be reused instead of fresh caustic and the wastewater contamination to be significantly reduced, which can result in savings of 1.5 M€ per year at current prices for the largest Portuguese oil refinery. The capital investment needed to implement the required NF membrane system is less than 10% of that associated with the traditional wet air oxidation solution to the spent caustic problem. The operating costs are very similar, but can be less than half if the NF concentrate is reused in refinery pH control applications. The payback period was estimated at 1.1 years. Overall, the pilot plant results and the process economic evaluation indicate that the proposed NF treatment process is a very competitive solution and a highly promising alternative to conventional and existing spent caustic treatment units.
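The abstract does not give the PLS model itself. As an illustrative sketch only, here is a generic PLS regression in Python with scikit-learn, using synthetic stand-ins for the caustic washing unit variables (all names and values are invented):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: X holds process variables of the caustic washing
# unit (e.g. flows, temperatures, NaOH strength), y the polar O&G
# concentration in its effluent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=200)

pls = PLSRegression(n_components=3)  # latent variables; tune by cross-validation
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))
pls.fit(X, y)
y_pred = pls.predict(X)
```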
Abstract:
Joints play a major role in the structural behaviour of old timber frames [1]. Current standards mainly focus on modern dowel-type joints and usually provide little guidance (with the exception of the German and Swiss NAs) to designers regarding traditional joints. With few exceptions, see e.g. [2], [3], [4], most of the research undertaken today focuses on the reinforcement of dowel-type connections. When considering old carpentry joints, it is neither realistic nor useful to try to describe the behaviour of each and every type of joint. The discussion here is not another attempt to classify or compare joint configurations [5], [6], [7]. Although some classification rules defining different types of carpentry joints exist, they are difficult to apply because of the differences in the way joints are fashioned, depending on their geographical location and age. In view of this, checking the relevance of the calculations is a mandatory first step. A limited number of carpentry joints, along with some calculation rules and possible strengthening techniques, are presented here.
Abstract:
Accepted Manuscript
Abstract:
Integrated Master's dissertation in Industrial Engineering and Management
Abstract:
In the present work, the benefits of using graphics processing units (GPU) to aid the design of complex-geometry profile extrusion dies are studied. For that purpose, a 3D finite-volume code that employs unstructured meshes to solve the coupled continuity, momentum and energy conservation equations governing the fluid flow, together with a constitutive equation, was used. To evaluate the possibility of reducing the time spent on the numerical calculations, the code was parallelized on the GPU using a simple programming approach without complex memory manipulations. For verification purposes, simulations were performed for three benchmark problems: Poiseuille flow, lid-driven cavity flow and flow around a cylinder. Subsequently, the code was used in the design of two real-life extrusion dies, for the production of a medical catheter and of a wood-plastic composite decking profile. To evaluate the benefits, the results obtained with the GPU-parallelized code were compared, in terms of speedup, with a serial implementation of the same code that traditionally runs on the central processing unit (CPU). The results show that, even with the simple parallelization approach employed, a significant reduction of the computation times was obtained.
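As a rough, hedged illustration of how such CPU-versus-GPU speedups are measured, here is a minimal Python/CuPy sketch in which a simple structured-grid Jacobi sweep stands in for the per-cell updates of a flow solver; the thesis code itself is a 3D unstructured-mesh finite-volume solver, which this does not reproduce:

```python
import time
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

def jacobi_step(u, xp):
    """One Jacobi sweep on a periodic 2D grid; xp is numpy or cupy."""
    return 0.25 * (xp.roll(u, 1, 0) + xp.roll(u, -1, 0) +
                   xp.roll(u, 1, 1) + xp.roll(u, -1, 1))

n, iters = 2048, 200
u_cpu = np.random.rand(n, n)
u_gpu = cp.asarray(u_cpu)

t0 = time.perf_counter()
for _ in range(iters):
    u_cpu = jacobi_step(u_cpu, np)
t_cpu = time.perf_counter() - t0

_ = jacobi_step(u_gpu, cp)  # warm-up: exclude kernel compilation from timing
cp.cuda.Stream.null.synchronize()
t0 = time.perf_counter()
for _ in range(iters):
    u_gpu = jacobi_step(u_gpu, cp)
cp.cuda.Stream.null.synchronize()  # wait for the GPU before stopping the clock
t_gpu = time.perf_counter() - t0

print(f"speedup ~ {t_cpu / t_gpu:.1f}x")
```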
Abstract:
The monitoring data collected during tunnel excavation can be used in inverse analysis procedures to identify more realistic geomechanical parameters and thereby increase the knowledge of the formations of interest. These more realistic parameters can be used in real time to adapt the project to the actual in situ behaviour of the structure. However, monitoring plans are normally designed for safety assessment, not for the purposes of inverse analysis, and there is a lack of knowledge about the types and quantity of measurements needed to successfully identify the parameters of interest. The optimisation algorithm chosen for the identification procedure may also be important in this respect. In this work, this problem is addressed with a theoretical case, for which a thorough parametric study was carried out using two optimisation algorithms based on different calculation paradigms: a conventional gradient-based algorithm and an evolution strategy algorithm. Calculations were carried out for different sets of parameters, identified from several combinations of types and amounts of monitoring data. The results clearly show the high importance of the available monitoring data and of the chosen algorithm for the success rate of the inverse analysis process.
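As an illustration of the two calculation paradigms compared above, here is a minimal Python sketch of parameter identification on a toy forward model; differential evolution stands in for the evolution strategy, and all names and values are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# Toy forward model standing in for the tunnel simulation: it maps two
# "geomechanical parameters" to displacements at monitored points.
def forward(params, x):
    E, K0 = params
    return K0 * np.exp(-x / E)

x_obs = np.linspace(0.1, 10.0, 20)
u_obs = forward((3.0, 1.5), x_obs)        # synthetic "monitoring data"

def misfit(params):
    return float(np.sum((forward(params, x_obs) - u_obs) ** 2))

bounds = [(0.1, 10.0), (0.1, 5.0)]

# Paradigm 1: conventional gradient-based local search.
grad_fit = minimize(misfit, x0=[1.0, 1.0], method="L-BFGS-B", bounds=bounds)

# Paradigm 2: evolutionary global search (differential evolution here,
# as a stand-in for the evolution strategy used in the study).
evo_fit = differential_evolution(misfit, bounds=bounds, seed=0)

print(grad_fit.x, evo_fit.x)  # both should recover (3.0, 1.5)
```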
Abstract:
The observational method in tunnel engineering allows the actual conditions of the ground to be evaluated in real time and measures to be taken if its behavior deviates considerably from predictions. However, it lacks a consistent and structured methodology for using the monitoring data to adapt the support system in real time. Limit criteria above which adaptation is required are not defined, and complex inverse analysis procedures (Rechea et al. 2008, Levasseur et al. 2010, Zentar et al. 2001, Lecampion et al. 2002, Finno and Calvello 2005, Goh 1999, Cui and Pan 2012, Deng et al. 2010, Mathew and Lehane 2013, Sharifzadeh et al. 2012, 2013) may be needed to analyze the problem consistently. In this paper, a methodology for the real-time adaptation of the support system during tunneling is presented. As a first step, limit criteria for displacements and stresses are proposed. The methodology uses charts, constructed during the design stage from parametric calculations, to assist in the process; when such charts are not available, since it is not possible to predict every possible scenario, inverse analysis calculations are carried out. The methodology is applied to the "Bois de Peu" tunnel, which is composed of two tubes, each over 500 m long. High uncertainty existed concerning the heterogeneity of the soil and, consequently, the geomechanical design parameters. The methodology was applied in four sections, and the results presented focus on two of them. It is shown that the methodology has the potential to be applied in real cases, contributing to a consistent approach to the real-time adaptation of the support system, and the results highlight the importance of good-quality, purpose-specific monitoring data for improving the inverse analysis procedure.
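A schematic, purely illustrative sketch of the decision step of such a real-time loop; the function, threshold and chart names are invented, not taken from the paper:

```python
def support_action(measured_mm, limit_mm, chart_lookup=None):
    """Decision step of a real-time adaptation loop (schematic only)."""
    if measured_mm <= limit_mm:
        return "keep current support"
    if chart_lookup is not None:
        # Precomputed parametric charts from the design stage.
        return chart_lookup(measured_mm)
    # No chart covers this scenario: fall back to inverse analysis.
    return "run inverse analysis, then adapt the support system"

# Example: a chart that maps an exceedance to a heavier support class.
print(support_action(12.0, limit_mm=10.0,
                     chart_lookup=lambda d: "upgrade to support class C2"))
```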
Abstract:
Various differential cross-sections are measured in top-quark pair (tt̄) events produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV at the LHC with the ATLAS detector. These differential cross-sections are presented using a data set corresponding to an integrated luminosity of 4.6 fb⁻¹. They are given in terms of kinematic variables of a top-quark proxy referred to as the pseudo-top-quark, whose dependence on theoretical models is minimal. The pseudo-top-quark can be defined in terms of either reconstructed detector objects or stable particles in an analogous way. The measurements are performed on tt̄ events in the lepton+jets channel, requiring exactly one charged lepton and at least four jets, with at least two of them tagged as originating from a b-quark. The hadronic and leptonic pseudo-top-quarks are defined via the hadronic or leptonic decay mode, respectively, of the W boson produced by the top-quark decay in events with a single charged lepton. The cross-section is measured as a function of the transverse momentum and rapidity of both the hadronic and the leptonic pseudo-top-quark, as well as of the transverse momentum, rapidity and invariant mass of the pseudo-top-quark pair system. The measurements are corrected for detector effects and are presented within a kinematic range that closely matches the detector acceptance. The differential cross-section measurements of the pseudo-top-quark variables are compared with several Monte Carlo models that implement next-to-leading-order or leading-order multi-leg matrix-element calculations.