Abstract:
Objective To compare autoantibody features in patients with primary biliary cirrhosis (PBC) and individuals presenting antimitochondrial antibodies (AMAs) but no clinical or biochemical evidence of disease. Methods A total of 212 AMA-positive serum samples were classified into four groups: PBC (definite PBC, n = 93); PBC/autoimmune disease (AID; PBC plus other AID, n = 37); biochemically normal (BN) individuals (n = 61); and BN/AID (BN plus other AID, n = 21). Samples were tested by indirect immunofluorescence (IIF) on rat kidney (IIF-AMA) and ELISA [antibodies to pyruvate dehydrogenase E2-complex (PDC-E2), gp-210, Sp-100, and CENP-A/B]. AMA isotype was determined by IIF-AMA. Affinity of anti-PDC-E2 IgG was determined by 8 M urea-modified ELISA. Results High-titer IIF-AMA was more frequent in PBC and PBC/AID (57 and 70 %) than in BN and BN/AID samples (23 and 19 %) (p < 0.001). Triple isotype IIF-AMA (IgA/IgM/IgG) was more frequent in PBC and PBC/AID samples (35 and 43 %) than in BN samples (18 %; p = 0.008; p = 0.013, respectively). Anti-PDC-E2 levels were higher in PBC (mean 3.82; 95 % CI 3.36–4.29) and PBC/AID samples (3.89; 3.15–4.63) than in BN (2.43; 1.92–2.94) and BN/AID samples (2.52; 1.54–3.50) (p < 0.001). Anti-PDC-E2 avidity was higher in PBC (mean 64.5 %; 95 % CI 57.5–71.5 %) and PBC/AID samples (66.1 %; 54.4–77.8 %) than in BN samples (39.2 %; 30.9–37.5 %) (p < 0.001). PBC and PBC/AID samples recognized more cell domains (mitochondria, nuclear envelope, PML/Sp-100 bodies, centromere) than BN (p = 0.008) and BN/AID samples (p = 0.002). Three variables were independently associated with established PBC: high-avidity anti-PDC-E2 (OR 4.121; 95 % CI 2.118–8.019); high-titer IIF-AMA (OR 4.890; 2.319–10.314); antibodies to three or more antigenic cell domains (OR 9.414; 1.924–46.060). Conclusion The autoantibody profile was quantitatively and qualitatively more robust in definite PBC as compared with AMA-positive biochemically normal individuals.
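The abstract does not spell out how the avidity percentages were obtained; a common convention for urea-modified ELISAs, stated here only as an assumption about this study, is to report the avidity index as the ratio of the signal remaining after the 8 M urea wash to the signal of the untreated well:

    \[
    \text{avidity index (\%)} = \frac{\mathrm{OD}_{\text{8 M urea}}}{\mathrm{OD}_{\text{untreated}}} \times 100
    \]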
Abstract:
The research aims to investigate the concept of "congruity" as applied to the transformation of specific urban contexts, and thus to define a "non-arbitrary" method for evaluating existing or planned works, in order to recognize their character of congruity or, conversely, incongruity. Insertion and transformation interventions, at the scale of the urban district or even at the building scale, may prove congruous or incongruous with respect to the identity of the place to which they belong (an organism at the urban or territorial scale). A work is congruous when it does not stand in (overt) contrast to the identity-defining characters of its context. The definitions of incongruity and of the incongruous work become the yardstick for judging the relationship between an intervention and its context, and they are applied through an evaluation that is methodologically grounded and verified. The evaluation of congruity/incongruity may refer to existing, already-built works or to designs for new works; in the latter case the evaluation method acts as a guideline for steering the design itself towards congruity with the context. In an initial phase the research established the basic principles, defining what is to be understood by congruity and by congruity profile. Since the specification of congruity cannot rest on an established literature (the concept, in the terms described, was introduced by law 16/2002 of the Regione Emilia-Romagna; the Region itself acknowledges that the concept is still being refined through experiments, studies and pilot interventions), it starts from the study of the concepts of place, characters of the place, identity of the place, urban context, transformation of the built environment, protection of the building heritage, typological development, and incongruous superfetation. These concepts, although borrowed from related fields of research, constitute the premises for defining the congruity of transformations of urban contexts with respect to the identity of the place, through the protection and enhancement of its constitutive typological characters. The research then addressed the analysis of certain typical cases of incongruous works. For this purpose, four typical cases of interventions involving the removal of works deemed incongruous were chosen, and the congruity-evaluation methodology applied in them was examined. In addition, the application of the evaluation method by "categories of alterations" was tested through the study of the historic centre of Reggio Emilia, taken as a sample urban context. The analytical study is developed by investigating the relationship between buildings and the characters of the context, identifying and classifying the buildings deemed incongruous. Here the limits of the method of identifying incongruities by categories of alterations emerged: alterations defined a priori with respect to the context produce an arbitrary judgment, because it is detached from the characters of the place. The definition of what is congruous or incongruous must instead refer to a specific context, and the alterations of the characters that represent the identity of the place cannot be defined a priori by generalizing those concepts. Completing the research in the direction of the proposed result, the evaluation method was refined, based on the coincidence of the concepts of congruity and of phase pertinence, in relation to the typological development of the context.
Knowledge of the context in its typological characters is itself already an evaluation method, to the extent that a comparison can be made between the context and the work to be evaluated. The evaluation is not intended as a constraint on the introduction of new forms that may represent an evolution of what exists, updating the process of typological development in relation to changes in the framework of needs and performance requirements, but rather as a barrier to transformations that are uncritical towards the context and that overlay or unknowingly erase its peculiar, identity-defining signs. Finally, with a view to the applicability of the concepts set out, the research investigates the convergence between the proposed method and possible application procedures; in this sense it clarifies why it is desirable to define congruity in relation to open evaluation procedures. The planning instrument, understood as a system of plans at the various scales, is the appropriate setting for incorporating the reading of the stratification of identity-defining signs detectable in a context; a reading carried out through participatory decision-making processes, so as to extend to the community the definition of the cultural identity of the place. The specific evaluation of works or designs therefore requires an open procedure, similar to the evaluation procedure in force at the heritage-protection offices (soprintendenze), based on the concept of the responsibility of the designer and of the evaluator, with reference to the responsibility of the community, which is instead expressed in the planning instrument. Indeed, an objective type of evaluation, based on reference to pre-established regulations or schemes, conflicts with the sense of a methodologically grounded evaluation, which, on the contrary, is a basic theoretical assumption of the research.
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared with desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only through a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more important, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks onto the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications onto multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, sometimes even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel-matrix driving circuits, and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power display technologies suitable for mobile applications that support low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction, preserving the user-perceived quality through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform the pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image-processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach improves on CPU-intensive techniques, saving system power without requiring either a dedicated display technology or hardware modification (a rough sketch of the idea is given after this abstract).

Thesis Overview

The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and we prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework for solving the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers write efficient software for a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
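To make the compensation arithmetic concrete, here is a minimal, hypothetical sketch in Python/NumPy. It is not the hardware-assisted implementation the dissertation describes, and the dimming policy, function names and clipping budget are all assumptions: the backlight is dimmed by a factor derived from the frame's brightness distribution, and pixel values are scaled up correspondingly so that perceived luminance is roughly preserved.

    # Hypothetical sketch: backlight dimming with pixel compensation.
    # The dissertation performs the compensation on a hardware image-processing
    # unit; this NumPy version only illustrates the arithmetic.
    import numpy as np

    def choose_dim_factor(frame, clip_budget=0.01, floor=0.2):
        """Pick the strongest dimming that saturates at most clip_budget of the pixels."""
        bright = np.percentile(frame, 100 * (1 - clip_budget))  # near-maximum pixel value
        return float(max(bright / 255.0, floor))                # never dim below an assumed floor

    def compensate_frame(frame, dim_factor):
        """Scale pixel values up to offset a backlight dimmed to dim_factor (0..1]."""
        boosted = frame.astype(np.float32) / dim_factor
        return np.clip(boosted, 0, 255).astype(frame.dtype)

    # Usage (hypothetical driver calls):
    # frame = grab_next_frame()
    # d = choose_dim_factor(frame)
    # set_backlight(d)
    # show(compensate_frame(frame, d))

The trade-off is the one discussed in the abstract: stronger dimming saves more backlight power but clips more highlights, which is why a budget on saturated pixels is used here to bound the perceived quality loss.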
Abstract:
The present work deals with the development of a function approximator and its use in methods for learning discrete and continuous actions: 1. A general function approximator – Locally Weighted Interpolating Growing Neural Gas (LWIGNG) – is developed on the basis of a Growing Neural Gas (GNG). The topological neighbourhood within the neuron structure is used to interpolate between neighbouring neurons and to compute the approximation through local weighting. The performance of the approach, in particular with respect to changing target functions and changing input distributions, is demonstrated in a series of experiments. 2. For learning discrete actions, LWIGNG is combined with Q-learning to form the Q-LWIGNG method. To this end, the underlying GNG algorithm has to be modified, because in action learning the input data arrive in a particular order. Q-LWIGNG achieves very good results on the pole-balancing and mountain-car problems, and good results on the acrobot problem. 3. For learning continuous actions, a REINFORCE algorithm is combined with LWIGNG to form the ReinforceGNG method. An actor-critic architecture is employed in order to learn from delayed rewards. LWIGNG approximates both the state-value function and the policy, which is represented by the situation-dependent parameters of a normal distribution. ReinforceGNG is successfully applied to learning movements for a simulated two-wheeled robot that has to intercept a rolling ball under certain conditions.
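As a rough illustration of the idea behind Q-LWIGNG, the sketch below (Python) combines a locally weighted value estimate over prototype units with a standard Q-learning update. It is only a hedged approximation of the approach summarized above: the GNG growth and edge-adaptation rules are omitted, the topological neighbourhood is replaced by a simple k-nearest-units rule, and all class and parameter names are invented for illustration.

    # Hypothetical sketch of locally weighted Q-value interpolation over
    # GNG-like prototype units, combined with a Q-learning update.
    # Unit insertion/adaptation (the "growing" part of GNG) is omitted.
    import numpy as np

    class LocallyWeightedQ:
        def __init__(self, unit_positions, n_actions, alpha=0.1, gamma=0.99):
            self.pos = np.asarray(unit_positions, dtype=float)  # unit positions in state space
            self.q = np.zeros((len(self.pos), n_actions))       # one Q-vector per unit
            self.alpha, self.gamma = alpha, gamma

        def _weights(self, state, k=3):
            # Distance-based weights over the k nearest units, standing in for
            # the topological neighbourhood of the winning unit.
            d = np.linalg.norm(self.pos - np.asarray(state, dtype=float), axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-8)
            return idx, w / w.sum()

        def q_values(self, state):
            idx, w = self._weights(state)
            return w @ self.q[idx]                               # interpolated Q(s, .)

        def update(self, s, a, r, s_next, done):
            # Standard Q-learning target; the TD error is spread over the
            # contributing units in proportion to their weights.
            target = r + (0.0 if done else self.gamma * self.q_values(s_next).max())
            idx, w = self._weights(s)
            td = target - self.q_values(s)[a]
            self.q[idx, a] += self.alpha * w * td

In ReinforceGNG, the same locally weighted scheme would instead approximate a state-value baseline and the mean and standard deviation of a Gaussian policy, with a REINFORCE gradient taking the place of the TD update; that variant is not sketched here.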
Abstract:
In the present work, high-quality PMMA opals with different sphere sizes, silica opals made from large spheres, multilayer opals, and inverse opals were fabricated. Highly monodisperse PMMA spheres were synthesized by surfactant-free emulsion polymerization (polydispersity ~2%). Large-area, well-ordered PMMA crystalline films of homogeneous thickness were produced by the vertical deposition method using a drawing device. Optical experiments confirmed the high quality of these PMMA photonic crystals; for example, well-resolved high-energy bands were observed in the transmission and reflectance spectra of the opaline films. For the fabrication of high-quality opaline photonic crystals from large silica spheres (890 nm diameter) self-assembled on patterned Si substrates, a novel technique was developed in which the crystallization was performed using a drawing apparatus in combination with stirring. The achievements comprise spatially selective opal crystallization without special treatment of the wafer surface, an opal lattice that matches the pattern precisely in width as well as depth, a notable absence of cracks within the extent of the trenches, and a good three-dimensional order of the opal lattice even in trenches with complex confined geometries. Multilayer opals composed of opaline films with different sphere sizes or different materials were produced by a sequential crystallization procedure. Transmission studies of a triple-layer hetero-opal revealed that its optical properties cannot be regarded merely as the linear superposition of two independent photonic bandgaps; the remarkable interface effect is a narrowing of the transmission minima. Large-area, high-quality, and robust photonic opal replicas made from silicate-based inorganic-organic hybrid polymers (ORMOCER®s) were prepared by the template-directed method, in which a high-quality PMMA opal template was infiltrated with a neat inorganic-organic ORMOCER® oligomer that can be photopolymerized within the opaline voids, leading to a fully developed replica structure with a filling factor of nearly 100%. The opal replica is structurally homogeneous as well as thermally and mechanically stable, and the large-scale (cm2-sized) replica films can be handled easily as free-standing films with a pair of tweezers.
Abstract:
The Oxford Programme for Immunomodulatory Immunoglobulin Therapy has been operating since 1992 at Oxford Radcliffe Hospitals in the UK. Initially, this program was set up for patients with multifocal motor neuropathy or chronic inflammatory demyelinating polyneuropathy to receive reduced doses of intravenous immunoglobulin (IVIG) in clinic on a regular basis (usually every 3 weeks). The program then rapidly expanded to include self-infusion at home, which monitoring showed to be safe and effective. It has since been extended to the treatment of other autoimmune diseases in which IVIG has been shown to be efficacious.
Abstract:
Bacterial meningitis causes persisting neurofunctional sequelae. The occurrence of apoptotic cell death in the hippocampal subgranular zone of the dentate gyrus characterizes the disease in patients and relates to deficits in learning and memory in corresponding experimental models. Here, we investigated why neurogenesis fails to regenerate the damage in the hippocampus associated with the persistence of neurofunctional deficits. In an infant rat model of bacterial meningitis, the capacity of hippocampal-derived cells to multiply and form neurospheres was significantly impaired compared to that in uninfected littermates. In an in vitro model of differentiating hippocampal cells, challenges characteristic of bacterial meningitis (i.e. bacterial components, tumor necrosis factor [20 ng/mL], or growth factor deprivation) caused significantly more apoptosis in stem/progenitor cells and immature neurons than in mature neurons. These results demonstrate that bacterial meningitis injures hippocampal stem and progenitor cells, a finding that may explain the persistence of neurofunctional deficits after bacterial meningitis.
Abstract:
The current study investigated the effects of supplementing rumen-protected choline (RPC) on the metabolic profile, selected liver constituents and transcript levels of selected enzymes, transcription factors and nuclear receptors involved in mammary lipid metabolism in dairy goats. Eight healthy lactating goats were studied: four received no choline supplementation (CTR group) and four received 4 g RPC chloride/day (RPC group). The treatment was administered individually, starting 4 weeks before expected kidding and continuing for 4 weeks after parturition. In the first month of lactation, milk yield and composition were measured weekly. On days 7, 14, 21 and 27 of lactation, blood samples were collected and analysed for glucose, beta-hydroxybutyrate, non-esterified fatty acids and cholesterol. On day 28 of lactation, samples of liver and mammary gland tissue were obtained. Liver tissue was analysed for total lipid and DNA content; mammary tissue was analysed for transcripts of lipoprotein lipase (LPL), fatty acid synthase (FAS), sterol regulatory element-binding proteins 1 and 2, peroxisome proliferator-activated receptor gamma and liver X receptor alpha. Milk yield was very similar in the two groups, but RPC goats had lower (P < 0.05) plasma beta-hydroxybutyrate. The total lipid content of the liver was unaffected (P = 0.890), but the total lipid/DNA ratio was lower (both P < 0.05) in RPC than in CTR animals. Choline had no effect on the expression of the mammary gland transcripts involved in lipid metabolism. The current plasma and liver data indicate that choline has a positive effect on liver lipid metabolism, whereas it appears to have little effect on the mammary gland transcript levels of various proteins involved in lipid metabolism. Nevertheless, the current results were obtained from a limited number of animals, and choline requirement and function in lactating dairy ruminants deserve further investigation.
Abstract:
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.