855 results for Primary physical image
Abstract:
The objective of this integrative review is to analyze the scientific production addressing the sexuality of women with breast cancer following mastectomy, focusing on the effects that physical discomfort caused by cancer treatments has on their sex lives. The search covered articles published between 2000 and 2009 in the MEDLINE, LILACS and PsycINFO databases, using the following descriptors: mastectomy, breast neoplasms, sexuality, sexual behavior, amputation, psychosexual development, and marital relations. Nine articles were selected that addressed the effects of physical discomfort from cancer treatments on the patients' sexuality. The findings revealed that, even when a patient's sex life is intense and fulfilling before the disease, factors such as stress, pain, fatigue, insult to body image, and low self-esteem resulting from the treatments may alter the sexual functioning of the affected woman. Healthcare professionals must be sensitized to welcome these patients and to include the topic in policies as well as in preventive, diagnostic, and therapeutic strategies.
Abstract:
Intravascular ultrasound (IVUS) phantoms are important for calibrating and evaluating many IVUS image processing tasks. However, phantom generation is never the primary focus of the related works, so it is rarely well documented and is usually based on more than one platform, which may not be accessible to investigators. Therefore, we present a framework for creating representative IVUS phantoms, for different intraluminal pressures, based on the finite element method and Field II. First, a coronary cross-section model is selected. Second, the coronary regions are identified and their properties assigned. Third, the corresponding mesh is generated. Fourth, the intraluminal force is applied and the deformation computed. Finally, speckle noise is incorporated. The framework was tested with respect to IVUS contrast, noise and strains. The outcomes are in line with related studies and expected values. Moreover, the framework toolbox is freely accessible and fully implemented in a single platform.
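A toy numerical sketch of the five-step pipeline outlined above, written in Python under broad assumptions: a circular cross-section stands in for the segmented coronary model, a crude radial dilation replaces the finite element solve, and Rayleigh-distributed multiplicative noise stands in for the Field II speckle simulation. It is not the authors' released toolbox.

```python
# Toy sketch of the phantom idea (NOT the authors' FEM + Field II toolbox).
import numpy as np

def toy_ivus_phantom(size=256, pressure=0.0, rng=None):
    rng = np.random.default_rng(rng)
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size / 2, y - size / 2)

    # Steps 1-2: geometry and region labelling with nominal echogenicities.
    lumen_r, wall_r = 40.0, 70.0
    # Step 4 stand-in: higher intraluminal pressure dilates the lumen slightly.
    lumen_r *= 1.0 + 0.05 * pressure
    echogenicity = np.where(r < lumen_r, 0.05,          # blood (dark)
                   np.where(r < wall_r, 0.8, 0.4))      # wall vs. surroundings

    # Step 5: multiplicative speckle (Rayleigh-distributed scattering).
    speckle = rng.rayleigh(scale=1.0, size=echogenicity.shape)
    return echogenicity * speckle

image_low  = toy_ivus_phantom(pressure=0.0, rng=0)
image_high = toy_ivus_phantom(pressure=1.0, rng=0)
```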
Abstract:
Antiphospholipid antibodies (aPL) and antiphospholipid syndrome (APS) have been described in primary Sjogren's syndrome (pSS), with controversial findings regarding aPL prevalence and their association with thrombotic events. We evaluated 100 consecutive pSS patients (American-European criteria) and 89 age-, gender- and ethnicity-matched healthy controls for IgG/IgM anticardiolipin (aCL), IgG/IgM anti-β2-glycoprotein I (aβ2GPI), and lupus anticoagulant (LA) (positivity according to the APS Sydney criteria). Clinical analysis followed a standardized interview and physical examination assessing thrombotic and nonthrombotic APS manifestations and thrombosis risk factors. aPL were detected in 16% of patients and 5.6% of controls (p = 0.035). LA was the most common aPL in patients (9%), followed by aβ2GPI (5%) and aCL (4%). Thrombotic events occurred in five patients [stroke in two, myocardial infarction in one and deep-vein thrombosis (DVT) in four], but in none of the controls (p = 0.061). Mean age at the time of stroke was 35 years. Three patients with thrombotic events (including the two with stroke) had APS (Sydney criteria) and were positive exclusively for LA. Comparison of patients with (n = 16) and without (n = 84) aPL revealed similar mean age, female predominance, and ethnicity (p ≥ 0.387). Frequencies of livedo reticularis (25 vs. 4.8%, p = 0.021), stroke (12.5 vs. 0%, p = 0.024), and DVT (18.8 vs. 1.2%, p = 0.013) were significantly higher in aPL-positive patients. Conversely, frequencies of hypertension, dyslipidemia, diabetes, obesity, smoking, sedentary lifestyle, and hormonal contraception were similar in patients with or without aPL (p ≥ 0.253). Our study identified LA as an important marker for APS in pSS, particularly for stroke in young patients, warranting routine evaluation of these antibodies and rigorous intervention on modifiable risk factors.
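The abstract does not state which statistical test produced the prevalence p-values; the following is only an illustrative sketch of a 2x2 frequency comparison, with counts reconstructed from the reported percentages (16 of 100 patients, about 5 of 89 controls).

```python
# Illustrative 2x2 comparison of aPL positivity; not necessarily the test used
# in the paper, and the control count is inferred from the reported 5.6%.
from scipy.stats import fisher_exact

table = [[16, 100 - 16],   # pSS patients: aPL-positive, aPL-negative
         [5,   89 - 5]]    # healthy controls
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```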
Abstract:
Objectives. To evaluate the frequency of seizures in primary antiphospholipid syndrome (PAPS) and their possible clinical and laboratory associations. Methods. Eighty-eight PAPS patients (Sydney criteria) were analyzed by a standardized interview, physical examination and review of medical charts. Risk factors for seizures, clinical manifestations, associated comorbidities, and antiphospholipid antibodies were evaluated. Results. Nine (10.2%) patients with seizures were identified, 77.8% of whom had seizure onset after the PAPS diagnosis. Mean age, gender, and race were comparable in the groups with or without seizures. Interestingly, a higher frequency of current smoking (44.4 versus 10.1%, P = 0.019) was observed in the first group. Stroke, Sneddon's syndrome, and livedo reticularis were more frequent in PAPS patients with seizures than in those without, although the differences were not statistically significant (P > 0.05). Comparison between patients with seizure onset after the PAPS diagnosis (n = 7) and those without seizures (n = 79) demonstrated a higher frequency of current smoking (42.9 versus 10%, P = 0.042) and stroke (71.4 versus 30.4%, P = 0.041) in the first group. Regression analysis confirmed that smoking (P = 0.030) and stroke (P = 0.042) were independently associated with seizures. Conclusion. About 10.2% of PAPS patients had seizures, predominantly after the PAPS diagnosis, and seizures were associated with current smoking and stroke.
Abstract:
Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at √s = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is measured in the plane perpendicular to the beam direction using primary charged tracks with p_T > 0.5 GeV/c in |η| < 0.8. The mean sphericity as a function of the charged particle multiplicity at mid-rapidity (N_ch) is reported for events with different p_T scales ("soft" and "hard") defined by the transverse momentum of the leading particle. In addition, the mean charged particle transverse momentum versus multiplicity is presented for the different event classes, and the sphericity distributions in bins of multiplicity are presented. The data are compared with calculations of standard Monte Carlo event generators. The transverse sphericity is found to grow with multiplicity at all collision energies, with a steeper rise at low N_ch, whereas the event generators show an opposite tendency. The combined study of the sphericity and the mean p_T with multiplicity indicates that most of the tested event generators produce events with higher multiplicity by generating more back-to-back jets resulting in decreased sphericity (and isotropy). The PYTHIA6 generator with tune PERUGIA-2011 exhibits a noticeable improvement in describing the data, compared to the other tested generators.
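For readers unfamiliar with the observable, the sketch below computes a linearized transverse sphericity from track momenta, a common definition in such analyses (the exact convention used in the paper may differ): S_T approaches 0 for back-to-back, jet-like events and 1 for isotropic ones. Tracks are assumed to have p_T > 0 (the analysis uses p_T > 0.5 GeV/c).

```python
# Linearized transverse sphericity: S_T = 2*lambda_2 / (lambda_1 + lambda_2),
# where lambda_1 >= lambda_2 are the eigenvalues of the pT-weighted 2x2
# transverse momentum matrix.
import numpy as np

def transverse_sphericity(px, py):
    px, py = np.asarray(px, float), np.asarray(py, float)
    pt = np.hypot(px, py)
    # Each track enters with weight 1/pT, which makes the measure collinear safe.
    m = np.array([[np.sum(px * px / pt), np.sum(px * py / pt)],
                  [np.sum(px * py / pt), np.sum(py * py / pt)]]) / np.sum(pt)
    lam = np.linalg.eigvalsh(m)                 # ascending eigenvalues
    return 2.0 * lam[0] / (lam[0] + lam[1])

# Back-to-back (jet-like) event -> S_T near 0; isotropic event -> S_T near 1.
print(transverse_sphericity([2.0, -2.0], [0.01, -0.01]))
print(transverse_sphericity([1.0, -0.5, -0.5], [0.0, 0.87, -0.87]))
```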
Abstract:
The aim of this study was to investigate the influence of image resolution manipulation on the photogrammetric measurement of the rearfoot static angle. The study was designed as a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600, 384 pixels) and analyzed by three equally trained examiners on a 96 pixels per inch (ppi) screen. An experienced physiotherapist marked the anatomical landmarks of the rearfoot static angle on two occasions within a 1-week interval, and the three examiners measured the angles on the digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values between image resolutions and times of evaluation. The different resolutions were compared by analysis of variance, and inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners at each resolution did not differ (P > 0.05); however, the higher the image resolution, the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered unacceptable for all image resolutions (ICC range: 0.08-0.52). A whole-body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurements of the rearfoot static angle (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol.
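The abstract does not specify which ICC model was used; the sketch below only illustrates how inter-examiner agreement could be computed with pingouin on a small, made-up long-format table (column names and angle values are hypothetical).

```python
# Hedged sketch of an inter-examiner ICC computation on synthetic data.
import pandas as pd
import pingouin as pg

# Long format: one row per (subject, examiner) measurement of the rearfoot angle.
data = pd.DataFrame({
    "subject":  sum([[i] * 3 for i in range(1, 7)], []),
    "examiner": ["A", "B", "C"] * 6,
    "angle":    [4.2, 4.5, 4.1, 6.8, 7.0, 6.5, 5.1, 5.3, 5.0,
                 3.9, 4.2, 3.8, 7.4, 7.6, 7.1, 5.8, 6.0, 5.6],
})

icc = pg.intraclass_corr(data=data, targets="subject",
                         raters="examiner", ratings="angle")
print(icc[["Type", "ICC", "CI95%"]])
```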
Abstract:
The study of the hydro-physical behavior of soils using toposequences is of great importance for better understanding soil, water and vegetation relationships. This study aims at the hydro-physical and morphological characterization of soil from a toposequence in Galia, state of São Paulo, Brazil. The plot covers an area of 10.24 ha (320 × 320 m) located in a semi-deciduous seasonal forest. Based on ultra-detailed soil and topographic maps of the area, a representative transect of the soil in the plot was chosen. Five profiles were opened for the morphological description of the soil horizons, and hydro-physical and micromorphological analyses were performed to characterize the soil. Arenic Haplustult, Arenic Haplustalf and Aquertic Haplustalf were the soil types observed in the plot. The surface horizons had lower density and greater hydraulic conductivity, porosity and water retention at lower tensions than the deeper horizons. In the sub-surface horizons, greater water retention at higher tensions and lower hydraulic conductivity were observed, due to the structure type and greater clay content. The differences observed in the water retention curves between the sandy E and the clayey B horizons were mainly due to the size distribution, shape and type of soil pores.
Abstract:
Work carried out by: Garijo, J. C., Hernández León, S.
Abstract:
The subject of this doctoral dissertation is the definition of a new methodology for the morphological and morphometric study of fossilized human teeth, and it thereby aims to contribute to the reconstruction of human evolutionary history, with the intention of extending the approach to the different species of fossil hominids. Standardized investigative methodologies are lacking both for the orientation of the teeth under study and for the analyses that can be carried out once they are oriented. The opportunity to standardize a primary analysis methodology is furnished by the study of certain early Neanderthal and pre-Neanderthal molars recovered in two caves in southern Italy [Grotta Taddeo (Taddeo Cave) and Grotta del Poggio (Poggio Cave), near Marina di Camerata, Campania]. To these can be added other molars of Neanderthals and of modern humans from the Upper Paleolithic, scanned in the paleoanthropology laboratory of the University of Arkansas (Fayetteville, Arkansas, USA) in order to enlarge the paleoanthropological sample and thereby make the final results of the analyses more significant. The new analysis methodology is as follows: 1. Standardization of an orientation system for first molars (upper and lower), starting from the scan of a sample of 30 molars belonging to modern humans (15 lower M1 and 15 upper M1), the definition of landmarks, the comparison of various systems, and the choice of an orientation system for each of the two dental typologies. 2. The definition of an analysis procedure that considers only the first 4 millimeters of the dental crown starting from the collar: five sections parallel to the plane according to which the tooth has been oriented are taken, spaced 1 millimeter apart. The intention is to establish a method that allows fossil species to be differentiated even in the presence of worn teeth. 3. Results and conclusions. The new approach to the study of teeth provides a considerable quantity of information that can be better evaluated by enlarging the fossil sample. It has proved to be a valid tool for evolutionary classification, allowing the Neanderthal sample to be differentiated from that of modern humans. In particular, the molars from Grotta Taddeo, whose species of origin it had not previously been possible to determine with certainty, are classified by the present research as Neanderthal.
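A hedged sketch of how the sectioning step (five planes, 1 mm apart, over the first 4 mm above the collar) could be realized on a 3D tooth scan using trimesh; the file name and the assumption that the crown axis is +z are illustrative, and this is not the dissertation's actual workflow.

```python
# Cut an oriented tooth mesh with five planes, 1 mm apart, above the collar.
import trimesh

mesh = trimesh.load("oriented_molar.stl")   # hypothetical file, already oriented
collar_z = mesh.bounds[0][2]                # assume the collar sits at the lowest z

for i in range(5):                          # 0, 1, 2, 3, 4 mm above the collar
    section = mesh.section(plane_origin=[0, 0, collar_z + i],
                           plane_normal=[0, 0, 1])
    if section is None:
        continue                            # plane misses the mesh (e.g. worn crown)
    outline_2d, _ = section.to_planar()
    print(f"{i} mm: cross-sectional area = {outline_2d.area:.2f} mm^2")
```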
Abstract:
Doctoral program: Ingeniería de Telecomunicación Avanzada (Advanced Telecommunication Engineering)
Abstract:
The last decades have seen a large effort of the scientific community to study and understand the physics of sea ice. We currently have a wide - even though still not exhaustive - knowledge of the sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry is instead largely unknown. Sea ice algae production may account for up to 25% of overall primary production in ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover and light availability on sea ice primary production is poorly understood. There are only sparse localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling becomes then an auxiliary tool to help qualifying and quantifying the role of sea ice biogeochemistry in the ocean dynamics. In this thesis, a novel approach is used for the modelling and coupling of sea ice biogeochemistry - and in particular its primary production - to sea ice physics. Previous attempts were based on the coupling of rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. The focus is moved here to a more biologically-oriented point of view. A simple, however comprehensive, physical model of the sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, largely used and validated in the open ocean environment and in regional seas. The physical model has been developed having in mind the biogeochemical properties of sea ice and the physical inputs required to model sea ice biogeochemistry. The central concept of the coupling is the modelling of the Biologically-Active-Layer (BAL), which is the time-varying fraction of sea ice that is continuously connected to the ocean via brines pockets and channels and it acts as rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brines volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical enviroment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundaries between the two systems . This is done by computing the entrapment of matter and gases when sea ice grows and release to the ocean when sea ice melts to ensure mass conservation. The model was tested in different ice-covered regions of the world ocean to test the generality of the parameterizations. The focus was particularly on the regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure in General Circulation Models will add a new component to the latters (and in general to Earth System Models), which will be able to provide adequate estimate of the role and importance of sea ice biogeochemistry in the global carbon cycle.
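A purely conceptual toy of the mass-conserving entrapment/release idea described above (not the actual ESIM or BFM-SI code): a single tracer is exchanged between an ocean pool and a sea ice pool as the ice grows and then melts, and the total is conserved.

```python
# Toy illustration of mass-conserving exchange between ocean and sea ice pools.
ocean_c, ice_c = 100.0, 0.0          # arbitrary tracer units
ice_thickness = 0.0                  # metres

for growth in [0.02] * 30 + [-0.02] * 30:     # 30 steps of growth, then 30 of melt
    if growth > 0:
        # Growing ice entraps part of the tracer present in the ocean.
        entrapped = 0.5 * growth * ocean_c
        ocean_c -= entrapped
        ice_c += entrapped
    else:
        # Melting ice releases tracer in proportion to the ice lost.
        fraction_melted = min(-growth / max(ice_thickness, 1e-9), 1.0)
        released = ice_c * fraction_melted
        ice_c -= released
        ocean_c += released
    ice_thickness = max(ice_thickness + growth, 0.0)

print(f"total tracer = {ocean_c + ice_c:.1f} (conserved)")   # prints 100.0
```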
Abstract:
Nowadays, it is clear that the goal of creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; it also implies an increase in productivity and, as a consequence, smaller reactors can be used. In addition, an indirect gain may derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had continuous effort not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, contributing to moving the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation and innovative reactor configurations and process design. In order to implement all these ideas in real projects, the development of more efficient reactions is one primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency (a minimal calculation sketch of these metrics is given after this abstract). In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidized compound. As a matter of fact, only in a few oxidation reactions is total, or close to total, conversion achieved; usually the selectivity is limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, sometimes the cost of the oxidant further penalizes the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric to a catalytic oxidation), also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions that simulate the in-situ generation of hydrogen peroxide, that is, using a low concentration of the oxidant. Then, experiments were carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone. This can be considered a case of how economics could drive the sustainability issue; in fact, the new process, which yields phenol directly, besides avoiding the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different), might be economically convenient with respect to the conventional process if a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and analyzing in detail the effect of the most important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an improvement of the selectivity to the para-dihydroxylated compound and a decrease of the selectivity to the ortho isomer would be desirable. Also in this case, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and, ultimately, to make the industrial process more flexible, so that process performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a batch stirred reactor and in a re-circulating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: type of solvent or co-solvent, and particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals. This molecule is nowadays produced in large amounts as a co-product of biodiesel synthesis; it is therefore considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the oxidation of glycerol in the liquid phase, with hydrogen peroxide and TS-1; however, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein; the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. Since the oxidation step from acrolein to acrylic acid is already optimized at the industrial level, we decided to investigate in depth the first step of the process. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems usually met in glycerol dehydration).
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox functionality (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply the rules and strategies of "green chemistry"; in particular, I studied new, greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
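A minimal sketch of the reaction-efficiency metrics mentioned above (conversion, selectivity, yield and space-time yield). The definitions are standard, but all numbers, including the product molar mass, are illustrative and are not results from the thesis.

```python
# Standard efficiency metrics for a catalytic reaction; illustrative numbers only.
def reaction_metrics(n_in, n_out, n_product, nu=1.0,
                     reactor_volume_l=1.0, time_h=1.0, mw_product=172.2):
    conversion = (n_in - n_out) / n_in                  # fraction of reactant converted
    selectivity = n_product / (nu * (n_in - n_out))     # fraction going to desired product
    yield_ = conversion * selectivity                   # = n_product / (nu * n_in)
    # Space-time yield: mass of product per reactor volume per time, g / (L h).
    sty = n_product * mw_product / (reactor_volume_l * time_h)
    return conversion, selectivity, yield_, sty

# e.g. 1.0 mol reactant fed, 0.4 mol unconverted, 0.45 mol desired product formed
X, S, Y, STY = reaction_metrics(n_in=1.0, n_out=0.4, n_product=0.45)
print(f"X = {X:.0%}, S = {S:.0%}, Y = {Y:.0%}, STY = {STY:.0f} g/(L*h)")
```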
Abstract:
At the ecosystem level, soil respiration (Rs) represents the largest carbon (C) flux after gross primary productivity, being generated mainly by root respiration (autotrophic respiration, Ra) and soil microbial respiration (heterotrophic respiration, Rh). In terrestrial ecosystems, soils contain the largest C pool, storing twice the amount of C contained in plant biomass. Soil organic matter (SOM), which represents the main C storage in soil, is decomposed by the soil microbial community; this process produces CO2, which is mainly released as Rh. It is thus relevant to understand how microbial activity is influenced by environmental factors such as soil temperature, soil moisture and nutrient availability, since part of the CO2 produced by Rh directly increases the atmospheric CO2 concentration and therefore affects climate change. Among terrestrial ecosystems, agricultural fields have traditionally been considered sources of atmospheric CO2. In agricultural ecosystems, in particular apple orchards, I identified the role of root density, soil temperature, soil moisture and nitrogen (N) availability on Rs and on its two components, Ra and Rh. To do so I applied two techniques to separate Rs into its components, the "regression technique" and the "trenching technique". I also studied the response of Ra to different levels of N availability, distributed either uniformly or locally, in the case of Populus tremuloides trees. The results showed that Rs is mainly driven by soil temperature, with which it is positively correlated, that high levels of soil moisture have an inhibiting effect, and that N has a negligible influence on total Rs as well as on Ra. Furthermore, I found a negative response of Rh to high N availability, suggesting that microbial decomposition processes in the soil are inhibited by the presence of N. The contribution of Ra to Rs was 37% on average.
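One common form of the regression technique mentioned above regresses Rs measured over plots of varying root density and takes the zero-root intercept as an estimate of Rh; the sketch below uses synthetic numbers, not data from the thesis.

```python
# Sketch of the regression technique for partitioning soil respiration.
import numpy as np

root_density = np.array([0.5, 1.0, 1.8, 2.4, 3.1, 4.0])   # e.g. kg roots m^-3
rs = np.array([2.1, 2.6, 3.4, 3.9, 4.8, 5.5])             # umol CO2 m^-2 s^-1

slope, intercept = np.polyfit(root_density, rs, 1)
rh_estimate = intercept                       # expected respiration with no roots
ra_fraction = 1 - rh_estimate / rs.mean()     # rough autotrophic share of mean Rs

print(f"Rh ~ {rh_estimate:.2f} umol m^-2 s^-1; Ra ~ {ra_fraction:.0%} of mean Rs")
```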
Abstract:
The quality of fish products is inextricably linked to the freshness of the raw material, which is modulated by appropriate handling and storage conditions, especially the storage temperature after catch. The research presented in this thesis, which was largely conducted within a research project funded by the Italian Ministry of Agricultural, Food and Forestry Policies (MIPAAF), concerned the evaluation of the freshness of farmed and wild fish species in relation to different storage conditions, in ice (0°C) or at refrigeration temperature (4°C). Several specimens of different species, bogue (Boops boops), red mullet (Mullus barbatus), sea bream (Sparus aurata) and sea bass (Dicentrarchus labrax), were examined during storage under the different temperature conditions adopted. The control parameters assessed were physical (texture, through the use of a dynamometer; visual quality, using a computer vision system (CVS)), chemical (through 1H-NMR footprint metabolomics) and sensory (Quality Index Method, QIM). Microbiological determinations were also carried out on hake (Merluccius merluccius). In general, the results obtained confirmed that the handling/storage temperature is a key factor in maintaining fish freshness. NMR spectroscopy proved able to quantify and evaluate the kinetics of unselected compounds during fish degradation, even a posteriori, which may be suitable for the development of new parameters related to quality and freshness. The development of physical methods, particularly the image analysis performed by the computer vision system (CVS), for the evaluation of fish degradation is very promising. Among the CVS parameters, skin colour, the presence and distribution of gill mucus, and eye shape modification showed high sensitivity for estimating fish quality loss as a function of the adopted storage conditions. In particular, the eye concavity index showed a high positive correlation with the total QIM score.
Abstract:
In patients with coronary artery disease, the size of the myocardial infarction largely determines the subsequent clinical outcome. Accordingly, minimizing infarct size is the primary strategy for decreasing cardiovascular mortality. Promotion of collateral artery growth (arteriogenesis) is an appealing option for reducing infarct size. It has been demonstrated in experimental models that tangential fluid shear stress is the major trigger of arterial remodeling and, thus, of collateral growth. Lower-leg, high-pressure external counterpulsation triggered to occur during diastole induces a flow velocity signal, and thus tangential endothelial shear stress, in addition to the flow signal caused by cardiac stroke volume. Here we present two cases of cardiac transplant recipients as human "models" of physical coronary arteriogenesis, providing examples of progressing and regressing clinical arteriogenesis, and review available evidence from clinical studies on other feasible forms of physical arteriogenesis.