Abstract:
GPS positioning today constitutes an important tool for studying various tectonic and geodynamic processes at different spatial scales. GPS data can in fact be used to study both the motions of tectonic plates at the global scale and the slow relative movements across individual faults. The fields of application of GPS have grown considerably in recent years thanks to the increasing number of stations distributed over the Earth and to the refinement of measurement and data-processing procedures. However, while horizontal displacement velocities have long been widely studied, vertical velocities have not received the same attention, because they require sub-millimetre precision and are affected by numerous error sources. The aim of this work is to derive vertical deformation rates of the lithosphere in Western Europe from GPS phase measurements, via the analysis of position time series, and to correct these velocities for the contribution of Glacial Isostatic Adjustment (GIA), modelled using the SELEN software (SEa Level EquatioN solver). The result is a velocity field cleaned of the GIA contribution, representing an estimate of the vertical deformation rates of the study area associated with deformation processes other than GIA.
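The final step described above amounts to subtracting the modelled GIA uplift from each observed GPS vertical velocity. A minimal sketch of that arithmetic, assuming independent Gaussian uncertainties and entirely hypothetical station values (SELEN itself is a separate modelling code; this only illustrates the correction):

```python
import math

def correct_for_gia(v_gps_mm_yr, sig_gps, v_gia_mm_yr, sig_gia):
    """Remove the modelled GIA uplift from a GPS vertical velocity.

    Returns the residual velocity and its 1-sigma uncertainty,
    assuming the two error sources are independent.
    """
    v_res = v_gps_mm_yr - v_gia_mm_yr
    sig_res = math.sqrt(sig_gps**2 + sig_gia**2)
    return v_res, sig_res

# Hypothetical station: 1.8 mm/yr observed uplift, 0.9 mm/yr modelled GIA
v, s = correct_for_gia(1.8, 0.3, 0.9, 0.2)
print(f"residual uplift: {v:.1f} +/- {s:.2f} mm/yr")
```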
Abstract:
Packet loss during transmission over a wireless network has a fundamental impact on the quality of the link between two end systems. The goal of this project is to implement a technique for early, asymmetric retransmission of lost packets, so as to minimize data-recovery times and improve communication quality. Starting from a study of particular retransmission schemes, notably those implemented by the ABPS (Always Best Packet Switching) project, the idea emerged that a particularly useful form of retransmission could take place at the Access Point: when packet loss occurs between the AP and the mobile node attached to it via IEEE 802.11, instead of waiting for the TCP retransmission performed by the source end system, the Access Point itself retransmits towards the mobile node, allowing fast recovery of the lost data. This functionality was therefore conceptually divided into two parts: the first concerns the application that buffers the packets traversing the AP and keeps a copy of them in memory, so that they can be retransmitted when a delivery failure is signalled; the second concerns the kernel modification that enables early error signalling. An application has already been developed that performs early retransmission at the WiFi Access Point, i.e. retransmission before the loss notification reaches the source end point, relying on a simulated Error Detection mechanism. Asynchronous, early TCP retransmission has also been implemented.
This document describes the implementation of a new application that provides a more efficient version of the packet buffer and uses an asymmetric, early TCP retransmission mechanism, i.e. it triggers retransmission at TCP's request through notifications about the validity of the Acknowledgement field.
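The AP-side buffering half of this design can be sketched as follows. This is a hedged illustration, not the project's actual code: the class name, capacity, and eviction policy are assumptions, and a real implementation would handle frames in the kernel/driver path rather than in Python:

```python
from collections import OrderedDict

class APRetransmitBuffer:
    """Sketch of the Access-Point-side packet buffer: copies of forwarded
    segments are kept, keyed by TCP sequence number, so a link-layer loss
    notification can trigger an immediate local retransmission instead of
    waiting for the source end system's TCP recovery."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.buf = OrderedDict()  # seq -> payload, oldest entry first

    def forward(self, seq, payload):
        # Keep a copy before forwarding; evict the oldest when full.
        if len(self.buf) >= self.capacity:
            self.buf.popitem(last=False)
        self.buf[seq] = payload
        return payload  # would be handed to the wireless interface here

    def on_loss_notification(self, seq):
        # Early error signal from the kernel: retransmit locally if the
        # segment is still buffered; otherwise fall back to TCP recovery.
        return self.buf.get(seq)

ap = APRetransmitBuffer()
ap.forward(1000, b"segment-a")
assert ap.on_loss_notification(1000) == b"segment-a"
assert ap.on_loss_notification(2000) is None  # not buffered: TCP recovers
```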
Abstract:
The subject of this thesis is the analysis of galaxy clusters and their properties, through an introductory morphological and dynamical analysis, considerations on their thermal properties (with characteristics directly linked to temperature), and finally an inspection of the mechanisms that generate non-thermal emission and of its sources. We will look for relations between these properties. In particular, we will study specific configurations of the intracluster medium (ICM) within clusters, namely Halos, Relics and Mini-Halos, through the radiation they emit in the X-ray and radio bands. The first observations of galaxy clusters were made as early as the end of the 18th century by Charles Messier who, while scanning the sky in search of comets, compiled a catalogue of 110 cosmic objects which, although they appeared nebulous owing to the limited resolution of the telescopes of the time, were certainly not comets. Among these objects were also galaxy clusters. The first in-depth studies came only with the rapid technological progress of the 20th century, which made it possible to understand that those blurred formations were nothing but agglomerates of galaxies. Larger telescopes, and later interferometers, radio telescopes and X-ray observations, essentially opened up the field of astrophysics. In particular, in the years after the Second World War, Abell established the first catalogue of clusters based on morphological criteria. Other astronomers then extended the classification parameters on the basis of optical and mechanical characteristics. Finally, the most recent analyses base their conclusions on the study of the non-optical bands of the spectrum, mainly X-rays and radio waves.
Abstract:
Photovoltaics (PV) is the most direct way of converting solar energy into electricity, and is based on the effect observed by Becquerel in 1839. This effect remained little more than a laboratory curiosity from the mid-19th century until 1954, when the first silicon solar cell with a conversion efficiency of 6% was built at Bell Laboratories. Since then, research in this field has grown steadily, and today mature technologies are available, capable of high and still-improving performance. Tandem cells are currently the best example of photovoltaic devices able to convert a large fraction of the power radiated by the sun. Increasing efficiency with tandem cells means exploiting the different wavelengths of the solar spectrum. These devices are in fact built by stacking semiconductors, arranged from the bottom upwards in increasing order of their band-gap energies. Starting from an analysis of the main characteristics of solar radiation and of the operating principle of photovoltaic cells, this work aims to highlight the potential of multi-junction technology, which has already demonstrated a remarkable capacity to optimize solar cell performance and bodes well for the future.
Abstract:
This thesis analyses an optimization problem posed by retail stores that need to select and arrange their products in the shop. The problem arises from the need to maximize the total expected profit of the products on display, finding a shelf location for each of them. The products are grouped into departments, from each of which exactly one item must be selected and displayed. Constraints on the location and compatibility of products can also be expressed. The resulting problem is a generalization of the well-known Multiple-Choice Knapsack Problem and Multiple Knapsack Problem. An exhaustive literature search showed that this problem has not yet been studied. The problem was therefore formalized as an integer linear programming model. An exact algorithm based on column generation and branch and price is proposed for its solution. Four different models were formulated for solving the pricing problem on which the column generation is based, in order to identify the most efficient one. Three of the four proposed models have comparable performance, while the last proved less efficient. The results obtained show that the proposed solution method is suitable for instances of small to medium size.
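For the Multiple-Choice Knapsack core of the problem (exactly one item per department, a shared capacity), a compact dynamic program illustrates the structure. This sketch ignores the shelf-assignment and compatibility constraints of the full model described above, and the data are made up:

```python
def mckp(classes, capacity):
    """Multiple-Choice Knapsack: pick exactly one (profit, size) item from
    each class so that total size <= capacity and total profit is maximal.
    Returns the best profit, or None if no feasible selection exists."""
    NEG = float("-inf")
    # dp[c] = best profit achievable with total size exactly c
    dp = [0] + [NEG] * capacity
    for items in classes:
        ndp = [NEG] * (capacity + 1)
        for c, base in enumerate(dp):
            if base == NEG:
                continue
            for profit, size in items:
                if c + size <= capacity:
                    ndp[c + size] = max(ndp[c + size], base + profit)
        dp = ndp
    best = max(dp)
    return None if best == NEG else best

# Two departments, shelf capacity 4: pick (10,3)+(4,1) for profit 14.
print(mckp([[(10, 3), (7, 2)], [(6, 2), (4, 1)]], 4))  # 14
```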
Abstract:
The spread and evolution of smartphones have enabled a rapid expansion of the information that can be collected through device sensors, used to create new services for users or to considerably enhance existing ones, such as emergency services. This work explores the ability of mobile devices to provide the floor of the building the user is in, through the altitude computation made possible by the barometric sensor present in more and more devices, by analysing various methodologies with emphasis on the problems of long-term stationing. Among the methodologies, we also consider systems with access to information from an external device, and versions of them corrected for the problem of differing sensor hardware. Furthermore, an algorithm is proposed which, based only on the information collected by the internal barometric sensor, aims to limit the error generated by the natural evolution of atmospheric pressure over the course of the day, distinguishing with good accuracy a vertical displacement, such as movement between floors, from a pressure change due to other agents, such as weather. The results obtained by the analysed methodologies and their combinations are shown both per single sample, allowing the advantages and disadvantages of the individual methods to be compared in specific situations, and aggregated into use cases of possible users with different stationing needs inside a building.
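The altitude-from-pressure step underlying all of these methods can be sketched with the international barometric formula. The 3 m floor height and the reference-pressure handling are assumptions for illustration; in practice the ground reference must be refreshed to counter the weather-driven pressure drift discussed above:

```python
def altitude_m(pressure_hpa, p0_hpa=1013.25):
    """Height above the reference pressure level, via the international
    barometric formula (hypsometric approximation)."""
    return 44330.0 * (1.0 - (pressure_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_from_pressure(p_hpa, p_ground_hpa, floor_height_m=3.0):
    """Estimate the floor number from the pressure difference relative to
    a ground-level reference reading taken by the same device (so that
    per-device sensor bias cancels). Assumes 3 m per floor."""
    dh = altitude_m(p_hpa, p0_hpa=p_ground_hpa)
    return round(dh / floor_height_m)

# One floor up lowers the pressure by roughly 0.36 hPa:
print(floor_from_pressure(999.64, 1000.0))  # 1
```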
Abstract:
The increasing usage of wireless networks creates new challenges for wireless access providers. On the one hand, providers want to satisfy the user demands but on the other hand, they try to reduce the operational costs by decreasing the energy consumption. In this paper, we evaluate the trade-off between energy efficiency and quality of experience for a wireless mesh testbed. The results show that by intelligent service control, resources can be better utilized and energy can be saved by reducing the number of active network components. However, care has to be taken because the channel bandwidth varies in wireless networks. In the second part of the paper, we analyze the trade-off between energy efficiency and quality of experience at the end user. The results reveal that a provider's service control measures do not only reduce the operational costs of the network but also bring a second benefit: they help maximize the battery lifetime of the end-user device.
Abstract:
Field: Surgery. Background: Preservation of cardiac grafts for transplantation is not standardized and most centers use a single administration of crystalloid solution at the time of harvesting. We investigated possible benefits of an additional dose of cardioplegia dispensed immediately before implantation. Methods: Consecutive adult cardiac transplantations (2005–2012) were reviewed. Hearts were harvested following a standard protocol (Celsior 2 L, 4–8 °C). In 2008, 100 ml of crystalloid cardioplegic solution was added and administered immediately before implantation. Univariate and logistic regression analyses were used to investigate risk factors for post-operative graft failure and mid-term outcome. Results: A total of 81 patients, 44 standard ("Cardio") vs. 37 with additional cardioplegia ("CardioC"), were analyzed. Recipients and donors were comparable in both groups. CardioC patients demonstrated a reduced need for defibrillation (24 vs. 48%, p = 0.03), post-operative ratio of CK-MB/CK (10.1 ± 3.9 vs. 13.3 ± 4.2%, p = 0.001), intubation time (2.0 ± 1.6 vs. 7.2 ± 11.5 days, p = 0.05), and ICU stay (3.9 ± 2.1 vs. 8.5 ± 7.8 days, p = 0.001). Actuarial survival was reduced when graft ischemic time was >180 min in Cardio but not in CardioC patients (p = 0.033). Organ ischemic time >180 min (OR: 5.48, CI: 1.08–27.75), donor female gender (OR: 5.84, CI: 1.13–33.01), and recipient/donor age >60 (OR: 6.33, CI: 0.86–46.75), but not the additional cardioplegia or the observation period, appeared to be independent predictors of post-operative acute graft failure. Conclusion: An additional dose of cardioplegia administered immediately before implantation may be a simple way to improve early and late outcome of cardiac transplantation, especially in situations of prolonged graft ischemia. A large, ideally multicentric, randomized study is desirable to verify this preliminary observation.
Abstract:
Study purpose. Genetic advances are significantly impacting healthcare, yet recent studies of ethnic group participation in genetic services demonstrate low utilization rates by Latinos. Limited genetic knowledge is a major barrier. The purpose of this study was to field test items in a Spanish-language instrument that will be used to measure genetic knowledge relevant to type 2 diabetes among members of the ethnically heterogeneous U.S. Latino community. Accurate genetic knowledge measurement can provide the foundation for interventions to enhance genetic service utilization. Design. Three waves of cognitive interviews were conducted in Spanish to field test 44 instrument items. Thirty-six Latinos participated, 12 each representative of Mexican, Central and South American, and Cuban heritage, including 7 males and 29 females between 22 and 60 years of age; 17 participants had 12 years or less of education. Methods. Text narratives from transcriptions of audiotaped interviews were qualitatively analyzed using a coding strategy to indicate potential sources of response error. Through an iterative process of instrument refinement, codes that emerged from the data were used to guide item revisions at the conclusion of each phase; revised items were examined in subsequent interview waves. Results. Inter-cultural and cross-cultural themes associated with difficulties in interpretation and grammatical structuring of items were identified; difficulties associated with comprehension reflected variations in educational level. Of the original 44 items, 32 were retained, 89% of which were revised. Six additional items reflective of cultural knowledge were constructed, resulting in a 38-item instrument. Conclusions. Use of cognitive interviewing provided a valuable tool for detecting both potential sources of response error and cultural variations in these sources.
Analysis of interview data guided successive instrument revisions leading to improved item interpretability and comprehension. Although testing in a larger sample will be essential to establish validity and reliability, the outcome of field testing suggests initial content validity of a Spanish-language instrument to measure genetic knowledge relative to type 2 diabetes. Keywords: Latinos, genetic knowledge, instrument development, cognitive interviewing.
Abstract:
Background and aim. Hepatitis B virus (HBV) and hepatitis C virus (HCV) co-infection is associated with increased risk of cirrhosis, decompensation, hepatocellular carcinoma, and death. Yet, there is sparse epidemiologic data on co-infection in the United States. Therefore, the aim of this study was to determine the prevalence and determinants of HBV co-infection in a large United States population of HCV patients. Methods. The National Veterans Affairs HCV Clinical Case Registry was used to identify patients tested for HCV during 1997–2005. HCV exposure was defined as two positive HCV tests (antibody, RNA or genotype) or one positive test combined with an ICD-9 code for HCV. HCV infection was defined as only a positive HCV RNA or genotype. HBV exposure was defined as a positive test for hepatitis B core antibodies, hepatitis B surface antigen, HBV DNA, hepatitis Be antigen, or hepatitis Be antibody. HBV infection was defined as only a positive test for hepatitis B surface antigen, HBV DNA, or hepatitis Be antigen within one year before or after the HCV index date. The prevalence of exposure to HBV in patients with HCV exposure and the prevalence of HBV infection in patients with HCV infection were determined. Multivariable logistic regression was used to identify demographic and clinical determinants of co-infection. Results. Among 168,239 patients with HCV exposure, 58,415 patients had HBV exposure, for a prevalence of 34.7% (95% CI 34.5–35.0). Among 102,971 patients with HCV infection, 1,431 patients had HBV co-infection, for a prevalence of 1.4% (95% CI 1.3–1.5). The independent determinants of an increased risk of HBV co-infection were male sex, positive HIV status, a history of hemophilia, sickle cell anemia or thalassemia, history of blood transfusion, and cocaine and other drug use. Age >50 years and Hispanic ethnicity were associated with a decreased risk of HBV co-infection. Conclusions.
This is the largest cohort study in the United States on the prevalence of HBV co-infection. Among veterans with HCV, exposure to HBV is common (∼35%), but HBV co-infection is relatively low (1.4%). There is an increased risk of co-infection with younger age, male sex, HIV, and drug use, and a decreased risk in Hispanics.
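The reported prevalences can be reproduced from the counts in the abstract with a normal-approximation confidence interval (the abstract does not state which interval method was used, so the bounds may differ in the last decimal):

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation 95% confidence interval."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# HBV exposure among HCV-exposed veterans (reported: 34.7%, CI 34.5-35.0)
p, lo, hi = prevalence_ci(58415, 168239)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")

# HBV co-infection among HCV-infected veterans (reported: 1.4%, CI 1.3-1.5)
p, lo, hi = prevalence_ci(1431, 102971)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
```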
Abstract:
The time-variable gravity field, reflecting variations of the mass distribution in the Earth system, is one of the key observables for understanding the changing Earth. Mass variations are caused either by redistribution of mass in, on or above the Earth's surface or by geophysical processes in the Earth's interior. The first set of observations of monthly variations of the Earth's gravity field was provided by the US/German GRACE satellite mission, beginning in 2002. This mission is still providing valuable information to the science community. However, as GRACE has outlived its expected lifetime, the geoscience community is currently seeking successor missions in order to maintain the long time series of climate change that was begun by GRACE. Several studies on science requirements and technical feasibility have been conducted in recent years. These studies required a realistic model of the time-variable gravity field in order to perform simulation studies on the sensitivity of satellites and their instrumentation. This was the primary reason for the European Space Agency (ESA) to initiate a study on ''Monitoring and Modelling individual Sources of Mass Distribution and Transport in the Earth System by Means of Satellites''. The goal of this interdisciplinary study was to create simulated time-variable gravity fields that are as realistic as possible, based on coupled geophysical models, which could be used in the simulation processes in a controlled environment. For this purpose, global atmosphere, ocean, continental hydrology and ice models were used. The coupling was performed by using consistent forcing throughout the models and by including water flow between the different domains of the Earth system. In addition, gravity field changes due to solid Earth processes, such as continuous glacial isostatic adjustment (GIA) and a sudden earthquake with co-seismic and post-seismic signals, were modelled.
All individual model results were combined and converted to gravity field spherical harmonic series, which is the quantity commonly used to describe the Earth's global gravity field. The result of this study is a twelve-year time-series of 6-hourly time variable gravity field spherical harmonics up to degree and order 180 corresponding to a global spatial resolution of 1 degree in latitude and longitude. In this paper, we outline the input data sets and the process of combining these data sets into a coherent model of temporal gravity field changes. The resulting time series was used in some follow-on studies and is available to anybody interested.
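The quoted correspondence between a degree-and-order-180 spherical-harmonic expansion and roughly 1 degree spatial resolution follows from the usual half-wavelength rule, sketched here with the standard mean Earth radius:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def half_wavelength_km(l_max):
    """Spatial resolution (half-wavelength rule) of a spherical-harmonic
    expansion truncated at degree l_max: pi * R / l_max."""
    return math.pi * EARTH_RADIUS_KM / l_max

res = half_wavelength_km(180)
# One degree of latitude spans ~111.2 km, so degree 180 resolves ~1 degree.
print(f"{res:.0f} km, i.e. about {res / 111.2:.1f} degree(s)")
```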
Abstract:
The timing of the most recent Neoglacial advance in the Antarctic Peninsula is important for establishing global climate teleconnections and for providing post-glacial rebound corrections to gravity-based satellite measurements of ice loss. However, obtaining accurate ages from terrestrial geomorphic and sedimentary indicators of the most recent Neoglacial advance in Antarctica has been hampered by the lack of historical records and the difficulty of dating materials in Antarctica. Here we use a new approach to dating flights of raised beaches in the South Shetland Islands of the northern Antarctic Peninsula to bracket the age of a Neoglacial advance that occurred between 1500 and 1700 AD, broadly synchronous with compilations for the timing of the Little Ice Age in the northern hemisphere. Our approach is based on optically stimulated luminescence dating of the undersides of buried cobbles to obtain the age of beaches previously shown to have been deposited immediately inside and outside the moraines of the most recent Neoglacial advance. In addition, these beaches mark the timing of an apparent change in the rate of isostatic rebound thought to be in response to the same glacial advance within the South Shetland Islands. We use a Maxwell viscoelastic model of glacial-isostatic adjustment (GIA) to determine whether the rates of uplift calculated from the raised beaches are realistic given the limited constraints on the ice advance during this most recent Neoglacial advance. Our rebound model suggests that the melting of an additional 16-22% increase in the ice volume within the South Shetland Islands would result in a subsequent uplift rate of 12.5 mm/yr that lasted until 1840 AD, producing a cumulative uplift of 2.5 m.
This uplift rate and magnitude are in close agreement with the observed rates and magnitudes calculated from the raised beaches since the most recent Neoglacial advance along the South Shetland Islands, and fall within the range of uplift rates from similar settings such as Alaska.
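The internal consistency of these numbers is easy to check: 2.5 m of cumulative uplift at 12.5 mm/yr implies 200 years of enhanced rebound, placing its onset around 1640 AD, inside the 1500-1700 AD window quoted for the advance:

```python
uplift_rate_mm_yr = 12.5
cumulative_uplift_m = 2.5
end_year = 1840

duration_yr = cumulative_uplift_m * 1000.0 / uplift_rate_mm_yr
start_year = end_year - duration_yr
print(duration_yr, start_year)  # 200.0 1640.0
```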
Abstract:
The traditional ballasted track structure is still used successfully in high-speed railway lines; however, technical problems or performance features have led to non-ballasted track solutions in some cases. Considerable maintenance work is needed for ballasted tracks due to track deterioration. It is therefore very important to understand the mechanism of track deterioration and to predict the track settlement or track irregularity growth rate, in order to reduce track maintenance costs and enable new track structures to be designed. The objective of this work is to develop the most adequate and efficient models for calculating dynamic traffic load effects on the railway track infrastructure, and then to evaluate the dynamic effect on ballast track settlement, using a ballast track settlement prediction model which consists of the previously selected vehicle/track dynamic model and a track settlement law. The calculations are based on dynamic finite element models with direct time integration, contact between wheel and rail, and interaction with railway cars. An initial irregularity profile is used in the prediction model. The track settlement law is considered to be a function of the number of loading cycles and the magnitude of the loading, and represents the long-term behavior of ballast settlement. The results obtained include the track irregularity growth and the contact force in the final interaction of the numerical simulation.
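Settlement laws of this kind are typically empirical power laws in the number of load cycles and the load magnitude. The following sketch uses one common functional form with made-up coefficients; it is not the specific law calibrated in this work:

```python
def ballast_settlement_mm(n_cycles, axle_load_kn,
                          a=0.6, alpha=1.2, beta=0.25, p_ref_kn=100.0):
    """Illustrative empirical settlement law: s = a * (P/P_ref)^alpha * N^beta.

    All coefficients here are assumed values for illustration only:
    settlement grows sub-linearly with cycle count N (beta < 1) and
    faster than linearly with axle load P (alpha > 1).
    """
    return a * (axle_load_kn / p_ref_kn) ** alpha * n_cycles ** beta

# Doubling the cycle count increases settlement by 2**beta, not by 2:
s1 = ballast_settlement_mm(1_000_000, 170.0)
s2 = ballast_settlement_mm(2_000_000, 170.0)
print(s2 / s1)  # 2**0.25 ~ 1.19
```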
Abstract:
This paper reports the studies carried out to develop and calibrate the optimal models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, wheel-rail contact with the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D model of the coupled vehicle-track system has also been developed to contrast the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for the purpose of evaluating the performance of the infrastructure.
Abstract:
The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model up to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions, which are taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by the railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
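The non-linear Hertz contact mentioned above relates the normal contact force to the wheel-rail interpenetration through a 3/2-power law. A minimal sketch, with an illustrative stiffness constant rather than one derived from actual wheel and rail profiles:

```python
def hertz_contact_force_n(penetration_m, c_h=9.37e10):
    """Non-linear Hertz wheel-rail normal contact: F = C_H * delta**1.5.

    c_h is an illustrative Hertzian stiffness constant in N/m^1.5 (an
    assumed value, not one fitted to real profiles). A non-positive
    penetration means the wheel has lifted off: zero contact force.
    """
    if penetration_m <= 0.0:
        return 0.0  # loss of contact
    return c_h * penetration_m ** 1.5

# 0.1 mm of interpenetration gives a force on the order of a static
# wheel load (~94 kN here):
print(hertz_contact_force_n(1e-4))
```

The non-smooth switch between contact and lift-off is what makes the transient problem nonlinear and motivates the direct time integration used in the simulations.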