23 results for Non-evaluation
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Objective: To investigate the prognostic significance of ST-segment elevation (STE) in aVR associated with ST-segment depression (STD) in other leads in patients with non-STE acute coronary syndrome (NSTE-ACS). Background: In NSTE-ACS patients, STD has been extensively associated with severe coronary lesions and poor outcomes. The prognostic role of STE in aVR is uncertain. Methods: We enrolled 888 consecutive patients with NSTE-ACS. They were divided into two groups according to the presence or absence, on the admission ECG, of aVR STE ≥ 1 mm together with STD (defined as the high-risk ECG pattern). The primary and secondary endpoints were in-hospital cardiovascular (CV) death and the rate of culprit left main disease (LMD), respectively. Results: Patients with the high-risk ECG pattern (n=121) showed a worse clinical profile than patients without it (n=575) [median GRACE (Global Registry of Acute Coronary Events) risk score = 142 vs. 182, respectively]. A total of 75% of patients underwent coronary angiography. The rate of in-hospital CV death was 3.9%. On multivariable analysis, patients with the high-risk ECG pattern showed an increased risk of CV death (OR = 2.88, 95% CI 1.05-7.88) and culprit LMD (OR = 4.67, 95% CI 1.86-11.74) compared to patients without it. The prognostic significance of the high-risk ECG pattern was maintained even after adjustment for the GRACE risk score (OR = 2.28, 95% CI 1.06-4.93 and OR = 4.13, 95% CI 2.13-8.01, for the primary and secondary endpoints, respectively). Conclusions: STE in aVR associated with STD in other leads predicts in-hospital CV death and culprit LMD. This pattern may add prognostic information in patients with NSTE-ACS on top of the recommended risk scores.
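The effect sizes above are odds ratios with Wald-type 95% confidence intervals. As a hedged illustration of how such a ratio and interval are derived from a 2×2 table (the counts below are invented for illustration only, not the study's data), a minimal sketch:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf formula.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only (hypothetical, not from the study):
or_, lo, hi = odds_ratio_ci(12, 109, 15, 560)
```

Note that the interval is symmetric on the log scale, which is why published CIs such as 1.05-7.88 look skewed around the point estimate.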
Abstract:
Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique followed again by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement.
Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
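The registration scheme builds on normalized cross-correlation (NCC) as its similarity measure. A minimal sketch of plain single-scale NCC between two equally sized patches, assuming NumPy; the thesis's multiscale, level-set-coupled extension is not reproduced here:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches.
    Returns a value in [-1, 1]; 1 means identical up to an affine
    intensity change, which makes the measure robust to contrast drift."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)
```

In a registration loop, such a score would be maximized over candidate displacements of each patch; intensity invariance matters here because contrast agent passage changes brightness between frames.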
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, as well as the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest towards power quality in electrical systems is illustrated, by reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged as a concern only in recent years, in particular as a consequence of the energy market liberalization. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power quality degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of such a study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of such an approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
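The geometric core of locating a disturbance from transient arrival times can be sketched with the classical two-ended traveling-wave formula. This is a textbook illustration under idealized assumptions (known propagation speed, synchronized clocks, a single homogeneous line monitored at both ends), not the thesis's full distributed, multi-station procedure:

```python
def fault_location(L, v, t1, t2):
    """Distance of a fault from end 1 of a line of length L (m),
    given the transient propagation speed v (m/s) and the arrival
    times t1, t2 (s) at the two ends.

    Derivation: t1 = t0 + x/v and t2 = t0 + (L - x)/v, so
    t1 - t2 = (2x - L)/v  =>  x = (L + v*(t1 - t2)) / 2."""
    return (L + v * (t1 - t2)) / 2.0
```

In practice the uncertainty on v and on the time tags propagates into the estimated position, which is exactly the combined-uncertainty evaluation chapter 5 performs.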
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, most of which are characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades due to technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modelling non-linear pricing are then reviewed. In particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and the recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights on the sizes of the products that one finds in the market. Four equilibrium regions are identified, depending on the relative intensity of size economies with respect to consumers' evaluation of the good: regions in which the product is supplied in a single size, in several different sizes, or only in a very large one.
Both the private and the social desirability of non-linear pricing vary across the different equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies, which own the networks constituting the internet, should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation in the internet of content providers and final users. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
Abstract:
The aspartic protease BACE1 (β-amyloid precursor protein cleaving enzyme, β-secretase) is recognized as one of the most promising targets in the treatment of Alzheimer's disease (AD). The accumulation of β-amyloid peptide (Aβ) in the brain is a major factor in the pathogenesis of AD. Aβ is formed by the initial cleavage of β-amyloid precursor protein (APP) by β-secretase; therefore BACE1 inhibition represents one of the therapeutic approaches to control the progression of AD, by preventing the abnormal generation of Aβ. For this reason, in the last decade, many research efforts have focused on the identification of new BACE1 inhibitors as drug candidates. Generally, BACE1 inhibitors are grouped into two families: substrate-based inhibitors, designed as peptidomimetic inhibitors, and non-peptidomimetic ones. Research on non-peptidomimetic small-molecule BACE1 inhibitors remains the most interesting approach, since these compounds show improved bioavailability after systemic administration, due to better blood-brain barrier permeability compared to peptidomimetic inhibitors. Very recently, our research group discovered a new promising lead compound for the treatment of AD, named lipocrine, a hybrid derivative of lipoic acid and the AChE inhibitor (AChEI) tacrine, characterized by a tetrahydroacridine moiety. Lipocrine is one of the first compounds able to inhibit the catalytic activity of AChE and AChE-induced amyloid-β aggregation and to protect against reactive oxygen species. Due to this interesting profile, lipocrine was also evaluated for BACE1 inhibitory activity, proving to be a potent lead compound for BACE1 inhibition. Starting from this profile, a series of tetrahydroacridine analogues were synthesised, varying the chain length between the two fragments.
Moreover, following the approach of combining two different pharmacophores in a single molecule, we designed and synthesised different compounds bearing the moieties of known AChEIs (rivastigmine and caproctamine) coupled with lipoic acid, since it was shown that the dithiolane group is an important structural feature of lipocrine for optimal BACE1 inhibition. All the tetrahydroacridine-, rivastigmine- and caproctamine-based compounds were evaluated for BACE1 inhibitory activity in a FRET (fluorescence resonance energy transfer) enzymatic assay (test A). With the aim of enhancing the biological activity of the lead compound, we applied a molecular simplification approach to design and synthesize novel heterocyclic compounds related to lipocrine, in which the tetrahydroacridine moiety was replaced by 4-amino-quinoline or 4-amino-quinazoline rings. All the synthesized compounds were also evaluated in a modified FRET enzymatic assay (test B), changing the fluorescent substrate for enzymatic BACE1 cleavage. This test method guided an in-depth structure-activity relationship study of BACE1 inhibition by the most promising quinazoline-based derivatives. By varying the substituent at the 2-position of the quinazoline ring and by replacing the lipoic acid residue in the lateral chain with different moieties (e.g. trans-ferulic acid, a known antioxidant molecule), a series of quinazoline derivatives were obtained. In order to confirm the inhibitory activity of the most active compounds, they were evaluated with a third FRET assay (test C) which, surprisingly, did not confirm the previous good activity profiles. An evaluation of the kinetic parameters of the three assays revealed that method C is endowed with the best specificity and enzymatic efficiency.
Biological evaluation of the modified 2,4-diamino-quinazoline derivatives with method C allowed us to obtain a new lead compound bearing the trans-ferulic acid residue coupled to the 2,4-diamino-quinazoline core, endowed with good BACE1 inhibitory activity (IC50 = 0.8 μM). We reported on the variability of the results in the three different FRET assays, which are known to have some disadvantages in terms of interference rates that are strongly dependent on compound properties. The observed variability of the results could also be ascribed to the different enzyme origin, the varied substrate and the different fluorescent groups. Inhibitors should be tested in a parallel screening in order to obtain more reliable data before being tested in cellular assays. With this aim, a preliminary cellular BACE1 inhibition assay carried out on lipocrine confirmed a good cellular activity profile (EC50 = 3.7 μM), strengthening the idea of finding a small-molecule non-peptidomimetic BACE1 inhibitor. In conclusion, the present study allowed us to identify a new lead compound endowed with BACE1 inhibitory activity in the submicromolar range. Further optimization of the obtained derivative is needed in order to obtain a more potent and selective BACE1 inhibitor based on the 2,4-diamino-quinazoline scaffold. A side project related to the synthesis of novel enzymatic inhibitors of BACE1, aimed at exploring the chemistry of pseudopeptidic transition-state isosteres, was carried out during a research stage in Hanessian's group at the Université de Montréal (Canada). The aim of this work was the synthesis of the δ-aminocyclohexane carboxylic acid motif with stereochemically defined substitution, in order to incorporate such a constrained core into potential BACE1 inhibitors. This fragment, endowed with reduced peptidic character, is not known in the context of peptidomimetic design.
In particular, we envisioned an alternative route based on the organocatalytic asymmetric conjugate addition of nitroalkanes to cyclohexenone in the presence of D-proline and trans-2,5-dimethylpiperazine. The obtained enantioenriched 3-(α-nitroalkyl)-cyclohexanones were further functionalized to give the corresponding δ-nitroalkyl cyclohexane carboxylic acids. These intermediates were elaborated into the target structures, 3-(α-aminoalkyl)-1-cyclohexane carboxylic acids, in a new, readily accessible way.
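The IC50 values quoted throughout come from dose-response measurements. As a hedged illustration of the underlying relation (the standard Hill/logistic model linking inhibitor concentration to residual enzyme activity; the actual FRET assays fit such a curve to fluorescence readings, and this sketch is not the thesis's fitting procedure):

```python
def fractional_activity(conc, ic50, hill=1.0):
    """Remaining enzyme activity (0..1) at inhibitor concentration `conc`,
    with `conc` and `ic50` in the same units, from the Hill model.
    At conc == ic50 the activity is exactly 0.5 by construction."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)
```

For example, a compound with IC50 = 0.8 μM leaves 50% activity at 0.8 μM and progressively less at higher concentrations; assay-to-assay shifts in the apparent IC50 (as seen between tests A, B and C) correspond to horizontal shifts of this curve.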
Abstract:
Bread dough, and particularly wheat dough, due to its viscoelastic behaviour, is probably the most dynamic and complicated rheological system, and its characteristics are very important since they strongly affect the textural and sensorial properties of final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide much information about dough formulation, structure and processing. This explains why dough rheology has been a matter of investigation for several decades. In this research, rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of doughs and final products such as bread. In order to study the structural aspects of food products, image analysis techniques were used to integrate the information coming from empirical and fundamental rheological measurements. Evaluation of dough properties was carried out by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a Texture Analyser; small deformation rheological measurements were performed on a controlled stress-strain rheometer; moreover, the structure of different doughs was observed by image analysis, while bread characteristics were studied by texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the different samples analysed, and thus to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view.
For this aim the following materials were prepared and analysed: frozen dough made without yeast; frozen dough and bread made with frozen dough; doughs obtained using different fermentation methods; doughs made with Kamut® flour; dough and bread made with the addition of ginger powder; final products coming from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small deformation and large deformation methods. In general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effects of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs were investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen in the other types of fermentation. The beneficial effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 and 48 hours at 4°C). Small deformation oscillatory measurements and large deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products.
The different samples analyzed, "Coppia Ferrarese", "Pane Comune Romagnolo" and "Filone Terra di San Marino", showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of fresh bread quality, decreased during storage. Using empirical rheological tests, we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare the samples; but since these products are handmade, the differences could be regarded as added value. In conclusion, small deformation (in fundamental units) and large deformation methods played a significant role in monitoring the influence of different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of the formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence their final quality (e.g. raw material, bread-making procedure, time and temperature of fermentation and baking).
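The TPA parameters mentioned (cohesiveness, springiness, and the derived chewiness) are conventionally obtained from the two-compression force-time curve. A minimal sketch using the standard textbook definitions; the function name and argument layout are illustrative, not taken from the thesis:

```python
def tpa_indices(hardness, area1, area2, length1, length2):
    """Standard TPA indices from a double-compression curve.
    hardness: peak force of the first compression (N)
    area1, area2: work areas under the first and second compression
    length1, length2: deformation lengths of the first and second compression
    Returns (cohesiveness, springiness, chewiness)."""
    cohesiveness = area2 / area1          # how well the sample withstands a second bite
    springiness = length2 / length1       # elastic recovery between compressions
    chewiness = hardness * cohesiveness * springiness
    return cohesiveness, springiness, chewiness
```

On stale bread both ratios drop, which is why decreasing cohesiveness and springiness over storage time are read as a loss of freshness.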
Abstract:
Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis is focused on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is driven by a strong interaction between the magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitored data and for inferring a consistent conceptual model. The analyzed observables are ground displacement, gravity changes, electrical conductivity, the amount, composition and temperature of the gases emitted at the surface, and the extent of the degassing area. Results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response and radial pattern of these signals depend not only on the evolution of fluid circulation; a major role is played by the assumed rock properties. Numerical simulations highlight the differences that arise from the assumption of different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles and low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed, aimed at understanding the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when air temperature and barometric pressure changes are applied at the ground surface.
Hydrothermal circulation, however, is not only a characteristic of volcanic systems. Hot fluids are involved in several problems of practical interest, such as geothermal engineering, nuclear waste propagation in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increases resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives dense water up the conduits, but does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density stratified. If the warm and salty fluid does not cool while passing through the conduit, an oscillatory solution is possible. Parameter studies delineate steady-state (static) and oscillatory solutions.
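The new hydrostatic equilibrium mentioned above can be illustrated with an elementary static estimate of how high an overpressure can lift a brine column. This back-of-the-envelope relation ignores the density stratification, friction and transient effects that the TOUGH2 simulations actually resolve, and is offered only as an order-of-magnitude illustration:

```python
def brine_rise(delta_p, rho, g=9.81):
    """Static rise (m) of a brine column produced by an overpressure
    delta_p (Pa), for a fluid of density rho (kg/m^3):
        delta_h = delta_p / (rho * g).
    A 0.1 MPa overpressure lifts ~1100 kg/m^3 brine by roughly 9 m."""
    return delta_p / (rho * g)
```

If the lifted column does not reach a shallow aquifer, the system simply settles into the new equilibrium; sustained flow requires either a larger overpressure or density changes along the conduit, which is where the oscillatory regimes arise.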
Abstract:
Aim: To evaluate the early response to treatment with an antiangiogenic drug (sorafenib) in a heterotopic murine model of hepatocellular carcinoma (HCC) using ultrasonographic molecular imaging. Materials and Methods: The xenograft model was established by injecting a suspension of HuH7 cells subcutaneously in 19 nude mice. When the tumors reached a mean diameter of 5-10 mm, the mice were divided into two groups (treatment and vehicle). The treatment group received sorafenib (62 mg/kg) by daily oral gavage for 14 days. Molecular imaging was performed using contrast-enhanced ultrasound (CEUS), by injecting into the mouse venous circulation a suspension of VEGFR-2-targeted microbubbles (BR55, kind gift of Bracco Swiss, Geneva, Switzerland). Video clips were acquired for 6 minutes, then the microbubbles (MBs) were destroyed by a high mechanical index (MI) impulse, and another minute was recorded to evaluate the residual circulating MBs. The US protocol was repeated at days 0, +2, +4, +7 and +14 from the beginning of treatment administration. Video clips were analyzed using dedicated software (Sonotumor, Bracco Swiss) to quantify the signal of the contrast agent. Time/intensity curves were obtained and the difference between the mean MB signal before and after the high-MI impulse (Differential Targeted Enhancement, dTE) was calculated. dTE is a numeric value in arbitrary units proportional to the amount of bound MBs. At day +14 the mice were euthanized and the tumors analyzed for VEGFR-2, pERK and CD31 tissue levels using western blot analysis. Results: dTE values decreased from day 0 to day +14 in both the treatment and vehicle groups, and they were statistically higher in the vehicle group than in the treatment group at days +2, +7 and +14. With respect to the degree of tumor volume increase, measured as growth percentage delta (GPD), the treatment group was divided into two sub-groups: non-responders (GPD > 350%) and responders (GPD < 200%).
In the same way the vehicle group was divided into a slow growth group (GPD < 400%) and a fast growth group (GPD > 900%). dTE values at day 0 (immediately before treatment start) were higher in the non-responders than in the responders group, with a statistically significant difference at day 2, while dTE values were higher in the fast growth group than in the slow growth group only at day 0. A significant positive correlation was found between VEGFR-2 tissue levels and dTE values, confirming that the level of BR55 tissue enhancement reflects the amount of tissue VEGF receptor. Conclusions: The present findings show that, at least in murine experimental models, CEUS with BR55 is feasible and appears to be a useful tool for predicting tumor growth and response to sorafenib treatment in xenograft HCC.
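The dTE quantity described above (mean signal over the frames before bubble destruction minus the mean over the frames after the high-MI pulse) is simple enough to sketch directly. This assumes NumPy-compatible arrays of per-frame intensities and is only an illustration of the definition; the actual analysis was performed with the dedicated Sonotumor software:

```python
import numpy as np

def differential_targeted_enhancement(pre, post):
    """dTE in arbitrary units: mean contrast signal before the
    destruction pulse minus mean residual signal after it.
    The difference isolates the bound-microbubble contribution,
    since freely circulating bubbles persist after destruction refill."""
    return float(np.mean(pre) - np.mean(post))
```

A falling dTE over the treatment days then reads directly as a loss of VEGFR-2-bound microbubbles, i.e. reduced target expression.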
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained for the long term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, improves conditions for workers and supports sustainable development. The crumb-rubber modifier (CRM), obtained from shredded automobile tires and used in the United States since the mid-1980s, has also proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental point of view but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes in terms of environmental and mechanical performance, and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should also be extended to the design method and material characterization, because only with these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain, related to surface cracking and to rutting respectively. It works in time increments and, recursively using the output of one increment as input to the next, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of the pavement was compared to the performance of the same pavement structure with different kinds of asphalt concrete as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed.
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. The low environmental impact materials, namely Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different design approaches it provides, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented with an explanation of the laboratory test devices adopted. The fatigue and rutting performance of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
Resumo:
The objective of this study was to evaluate right ventricular (RV) function, using easily applicable echocardiographic measures, in patients with RV volume overload who had undergone corrective surgery for tetralogy of Fallot (TOF) or pulmonary atresia with VSD, and to study the relationship between ProBNP and RV contractile function, right atrial dilation, and the consequences of pulmonary insufficiency. Methods: The study included 50 patients (50% males, mean age 30.64 ± 13.30 years) with prior corrective surgery for TOF (90%) or pulmonary atresia + VSD (10%). Forty-nine patients underwent cardiac MRI and clinical evaluation, 47 echocardiography, 48 ECG, 34 cardiopulmonary exercise testing, and 29 ProBNP measurement. Results: The S-wave velocity (p < 0.0001) and the TAPSE (p < 0.0001) correlated significantly with RVEF estimated by cardiac MRI. The VO2 max was 27.93 ± 12.91 ml/kg/min; 15% of patients had a peak VE/VCO2 > 35. ProBNP correlated positively and significantly with the area of the right atrium (p = 0.0001), and negatively and significantly with VO2 max (p = 0.04). Patients with greater pulmonary insufficiency (PVR fraction > 30%) had a significantly increased RV end-diastolic volume (p = 0.01), reduced VO2 max (p = 0.04), and lower LV ejection fraction (p = 0.02) compared with patients with a PVR fraction ≤ 30%. Conclusion: TAPSE and S-wave velocity are fundamental and may become the techniques of choice for routine assessment of RV systolic function in adult patients with TOF. Monitoring of ProBNP is probably a useful choice, given its simplicity and the information it provides, which correlates with cardiopulmonary exercise testing. In view of the ventricular-ventricular interaction, measures to maintain or restore pulmonary valve function could preserve biventricular function.
Resumo:
Oncolytic virotherapy exploits the ability of viruses to infect and kill cells. It is suitable as a treatment for tumors that are not accessible by surgery and/or respond poorly to current therapeutic approaches. HSV is a promising oncolytic agent: it has a large genome able to accommodate large transgenes, and some attenuated oncolytic HSVs (oHSVs) are already in phase I and II clinical trials. The aim of this thesis was the generation of HSV-1 retargeted to tumor-specific receptors and detargeted from the natural HSV receptors, HVEM and Nectin-1. Retargeting was achieved by inserting a single chain antibody (scFv) specific for the selected tumor receptor into the HSV glycoprotein gD. In this research three tumor receptors were considered: human epidermal growth factor receptor 2 (HER2), overexpressed in 25-30% of breast and ovarian cancers and in gliomas; prostate specific membrane antigen (PSMA), expressed in prostate carcinomas and in the neovasculature of solid tumors; and epidermal growth factor receptor variant III (EGFRvIII). In vivo studies on the HER2-retargeted viruses R-LM113 and R-LM249 have demonstrated their high safety profile. For R-LM249, antitumor efficacy has been highlighted by target-specific inhibition of the growth of human tumors in models of HER2-positive breast and ovarian cancer in nude mice. In a murine model of HER2-positive glioma in nude mice, R-LM113 significantly increased the survival time of treated mice compared to controls. Up to now, the PSMA and EGFRvIII viruses (R-LM593 and R-LM613) have only been characterized in vitro, confirming the specific retargeting to the selected targets. This strategy has proved to be generally applicable to a broad spectrum of receptors for which a single chain antibody is available.
Resumo:
Shellfish are filter-feeding organisms that can accumulate many bacteria and viruses. Considering that depuration procedures are not effective in removing certain microorganisms, shellfish-borne diseases are frequent in many parts of the world, and their control must rely primarily on investigating the prevalence of human pathogens in shellfish and in the water environment. However, the diffusion of enteric viruses and Vibrio bacteria is unknown in many geographical areas, for example in Sardinia, Italy. A survey aimed at investigating the prevalence of Norovirus (NoV), hepatitis A virus (HAV), V. parahaemolyticus, V. cholerae and V. vulnificus was carried out, analyzing both local and imported purified, non-purified and retail shellfish from Northern Italy and Sardinia. Shellfish from both areas were found to be contaminated by NoVs, HAV and Vibrio, including retail and purified animals. Molecular analysis evidenced different NoV genogroups and genotypes, including bovine NoVs, as well as pathogenic Vibrio strains, underlining the risk for shellfish consumers. However, other approaches are also needed to control the diffusion of shellfish-borne diseases. It was originally thought that enteric viruses are passively accumulated by shellfish. Recently, it was proven that NoVs bind to specific carbohydrate ligands in oysters, and that various NoV strains are characterized by different bioaccumulation patterns. To deepen the knowledge on this topic, a study was carried out analyzing the bioaccumulation of up to 8 different NoV strains in four different species of shellfish. Different bioaccumulation patterns were observed for each shellfish species and NoV strain used, a finding potentially important for setting up effective shellfish purification protocols. Finally, a novel evaluation of viral contamination in shellfish from the French Atlantic coast was carried out following the passage of the Xynthia tempest over Western Europe, which caused massive destruction.
Different enteric viruses were found over a one-month period, demonstrating the potential of such events to contaminate shellfish.
Resumo:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to compensate for the lack of data on mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, which entails tests on drilled cores; on the other hand, it is established that non-destructive testing (NDT) cannot be used as the only means of obtaining structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (SonReb method). To this end, a large number of DT and NDT tests has been performed on several school buildings located in Cesena (Italy). The above relationships have been assessed on site by correlating NDT results to the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage of being much simpler to implement and use in future applications than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for calibration. This limitation warranted the search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to obtain a more reliable strength identification methodology.
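The SonReb idea described above can be illustrated with a minimal sketch. The power-law form R = a · S^b · V^c is the shape commonly used in the literature for SonReb correlations, but the coefficient values below are arbitrary placeholders, not the ones calibrated in this study: in practice they are fitted against the strength of drilled cores.

```python
# Minimal sketch of a SonReb-type correlation (assumed power-law form
# R = a * S^b * V^c). The coefficient values are illustrative placeholders;
# real values must be calibrated on cores from the concrete under study.

def sonreb_strength(rebound_index, pulse_velocity, a=2.2e-10, b=1.1, c=2.6):
    """Estimated compressive strength (MPa) from rebound index S and
    ultrasonic pulse velocity V (m/s), via R = a * S^b * V^c."""
    return a * rebound_index ** b * pulse_velocity ** c

def calibrate_scale(core_strengths, rebound, velocity, b=1.1, c=2.6):
    """Least-squares fit of the multiplicative coefficient 'a' only, with
    the exponents held fixed: a = sum(R*f) / sum(f*f), where f = S^b * V^c."""
    f = [s ** b * v ** c for s, v in zip(rebound, velocity)]
    return sum(r * fi for r, fi in zip(core_strengths, f)) / sum(fi * fi for fi in f)
```

This also makes the limitation stated above concrete: the fitted coefficients embody the characteristics of the calibration concrete, so the formula should not be transferred to concretes with different aggregates, age or moisture without recalibration.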
Resumo:
Introduction: microscopic colitis (MC), otherwise known as collagenous colitis (CC) and lymphocytic colitis (LC), comprises chronic inflammatory disorders of the colon that cause diarrhea and most frequently affect elderly women and subjects on drug therapy. In recent years their incidence appears to have increased in several Western countries, but their prevalence in Italy is still uncertain. Aim: the present prospective, multicenter study was designed to assess the prevalence of MC in patients undergoing colonoscopy for chronic non-bloody diarrhea. Patients and methods: from May 2010 to September 2010, all adult subjects referred to two centers in the Milan metropolitan area for pancolonoscopy were consecutively enrolled. In subjects with chronic non-bloody diarrhea, multiple biopsies were taken in the ascending colon, sigmoid colon and rectum, as well as from any macroscopic lesions. Results: of the 8008 colonoscopies examined, 265 were performed for chronic diarrhea; of these, 8 had incomplete information and 52 showed endoscopic findings consistent with other intestinal disorders (i.e. IBD, tumors, diverticulitis). 205 colonoscopies were essentially negative, 175 of which had adequate microscopic sampling (M:F = 70:105; median age 61 years). Histological analysis documented 38 new cases of MC (M:F = 14:24; median age 67.5 years): 27 CC (M:F = 10:17; median age 69 years) and 11 LC (M:F = 4:7; median age 66 years). In another 25 cases, microscopic alterations lacking sufficient criteria for a diagnosis of MC were observed. Conclusions: in the present study, microscopic analysis of the colon identified MC in 21.7% of subjects with chronic non-bloody diarrhea and a negative pancolonoscopy. Microscopic study of the colon is therefore a fundamental step for the correct diagnostic work-up of chronic diarrhea, especially after 60 years of age.
Large prospective, multicenter studies will be needed to clarify the role and weight of the risk factors associated with these disorders.