903 results for Tests for Continuous Lifetime Data


Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to retrospectively compare the effect of three different treatments on the healing outcome of bisphosphonate-related osteonecrosis of the jaws (BRONJ) in cancer patients. Twenty-two cancer patients were treated for BRONJ with one of the following protocols: clinical (pharmacological therapy), surgical (pharmacological plus surgical therapy), or PRP plus LPT (pharmacological plus surgical therapy plus platelet rich plasma (PRP) plus laser phototherapy (LPT)). The laser treatment was applied with a continuous diode laser (InGaAlP, 660 nm) in punctual, contact mode, 40 mW, spot size 0.042 cm², 6 J/cm² (6 s), and total energy of 0.24 J per point. The irradiations were performed on the exposed bone and surrounding soft tissue. The analysis of demographic data and risk factors was performed by gathering the following information: age, gender, primary tumor, bisphosphonate (BP) used, duration of BP intake, history of chemotherapy, use of steroids, and medical history of diabetes. The association between the current state of BRONJ (with or without bone exposure) and other qualitative variables was determined using the chi-square or Fisher's exact test. In all tests, the significance level adopted was 5%. Most BRONJ lesions occurred in the mandible (77%), after tooth extraction (55%), and in women (72%). A significantly higher percentage of patients reached the current state of BRONJ without bone exposure (86%) in the PRP plus LPT group than in the pharmacological (0%) and surgical (40%) groups after the 1-month follow-up assessment. These results suggest that the association of pharmacological and surgical therapy with PRP plus LPT significantly improves BRONJ healing in oncologic patients. Although prospective studies with larger sample sizes are still needed, this preliminary study may be used to inform a better-designed future study.
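The group comparison described here (current BRONJ state with or without bone exposure across treatment protocols) reduces to contingency-table testing. A minimal SciPy sketch, using hypothetical counts (the abstract does not report the raw per-group tables):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (counts NOT from the paper): rows = treatment
# protocol, columns = (no bone exposure, bone exposure) at 1-month follow-up.
table = [[6, 1],   # PRP plus LPT group
         [2, 3]]   # surgical group

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
# Reject independence at the 5% level used in the study if p < 0.05.
```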

Relevance:

30.00%

Publisher:

Abstract:

A common goal in gene expression data analysis is to identify, from a large pool of candidate genes, the genes that present significant changes in expression levels between a treatment and a control biological condition. Usually this is done with a test statistic and a cutoff value that separate the genes into differentially and nondifferentially expressed groups. In this paper, we propose a Bayesian approach to identify differentially expressed genes by sequentially calculating credibility intervals from predictive densities, which are constructed using the sampled mean treatment effect from all genes in the study excluding the treatment effect of genes previously identified with statistical evidence for difference. We compare our Bayesian approach with the standard ones based on the t-test and modified t-tests via a simulation study, using small sample sizes, which are common in gene expression data analysis. The results provide evidence that the proposed approach performs better than the standard ones, especially for cases with mean differences and increases in treatment variance relative to control variance. We also apply the methodologies to a well-known publicly available data set on the Escherichia coli bacterium.
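A toy sketch of the sequential screening idea, assuming a normal predictive density (the authors' exact construction is not given in the abstract): at each step an interval is built from the effects of genes not yet flagged, and the most extreme remaining gene is flagged if it falls outside.

```python
import numpy as np
from scipy import stats

def sequential_screen(effects, cred=0.95):
    """Iteratively flag genes whose treatment effect falls outside a
    credibility interval built from the not-yet-flagged genes.
    Toy normal predictive density; the paper's construction may differ."""
    flagged = []
    active = list(range(len(effects)))
    while len(active) > 2:
        vals = effects[active]
        lo, hi = stats.norm.interval(cred, loc=vals.mean(),
                                     scale=vals.std(ddof=1))
        # Most extreme gene among those still active.
        i = active[int(np.argmax(np.abs(vals - vals.mean())))]
        if lo <= effects[i] <= hi:
            break                    # everything remaining is consistent
        flagged.append(i)
        active.remove(i)
    return flagged

rng = np.random.default_rng(0)
effects = np.concatenate([rng.normal(0, 1, 95), rng.normal(4, 1, 5)])
print(sequential_screen(effects))    # indices flagged as differentially expressed
```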

Relevance:

30.00%

Publisher:

Abstract:

The oil industry uses gas separators in production wells because the free gas present at the pump suction reduces both pumping efficiency and pump lifetime. Free gas is therefore one of the most important variables in the design of pumping systems. However, there is little information on these separators in the literature. This is the case for the inverted-shroud gravitational gas separator, which has an annular geometry due to the installation of a cylindrical container between the well casing and the production pipe (tubing). The purpose of the present study is to understand the phenomenology and behavior of the inverted-shroud separator. Experimental tests were performed in a 10.5-m-long inclinable glass tube with air and water as working fluids. The water flow rate was in the range of 8.265-26.117 l/min and the average inlet air mass flow rate was 1.1041 kg/h, with inclination angles of 15, 30, 45, 60, 75, 80 and 85 degrees. One of the findings is that the length between the inner annular level and the production pipe inlet is one of the most important design parameters, and based on it a new criterion for total gas separation is proposed. We also found that the phenomenology of the studied separator does not depend directly on the gas flow rate, but on the average velocity of the free surface flow generated inside the separator. Maps of gas separation efficiency were plotted and showed that the liquid flow rate, the inclination angle and the pressure difference between the casing and the production pipe outlet are the main variables related to the gas separation phenomenon. The new data can be used for the development of design tools aimed at the optimized design of pumping systems for oil production in directional wells.

Relevance:

30.00%

Publisher:

Abstract:

We derive asymptotic expansions for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the class of dispersion models, under a sequence of Pitman alternatives. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing the precision parameter. Based on these nonnull asymptotic expansions, the power of the four tests, which are equivalent to first order, is compared. Furthermore, Monte Carlo simulations are presented in order to compare the finite-sample performance of these tests in this class of models. An empirical application to a real data set is considered for illustrative purposes.
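For reference, the four statistics have the standard first-order-equivalent forms below, here for testing $H_0\colon \psi=\psi_0$ under a parameter partition $\theta=(\psi^\top,\lambda^\top)^\top$, with log-likelihood $\ell$, score $U$, $\hat\theta$ the unrestricted and $\tilde\theta$ the restricted maximum likelihood estimates, and $K^{\psi\psi}$ the $(\psi,\psi)$ block of the inverse information matrix (the paper's own notation may differ):

```latex
\begin{aligned}
S_{\mathrm{LR}} &= 2\{\ell(\hat\theta)-\ell(\tilde\theta)\}, &
S_{\mathrm{W}} &= (\hat\psi-\psi_0)^\top \{K^{\psi\psi}(\hat\theta)\}^{-1}(\hat\psi-\psi_0), \\
S_{\mathrm{R}} &= U_\psi(\tilde\theta)^\top K^{\psi\psi}(\tilde\theta)\,U_\psi(\tilde\theta), &
S_{\mathrm{T}} &= U_\psi(\tilde\theta)^\top(\hat\psi-\psi_0).
\end{aligned}
```

Under $H_0$ all four converge to the same chi-squared limit, which is why distinguishing their power requires the higher-order nonnull expansions derived in the paper.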

Relevance:

30.00%

Publisher:

Abstract:

The aim of the present study was to investigate the association between patellofemoral pain syndrome and two clinical static measurements: the rearfoot angle and the Q angle. The design was a cross-sectional, observational, case-control study. We evaluated 77 adults (both genders): 30 participants with patellofemoral pain syndrome and 47 controls. We measured the rearfoot and Q angles by photogrammetry. Independent t-tests were used to compare continuous outcome measures between groups. Continuous outcome data were also transformed into categorical clinical classifications, in order to verify their statistical association with the dysfunction, and χ² tests for multiple responses were used. There were no differences between groups for the rearfoot angle [mean difference: 0.2º (95%CI -1.4-1.8)] or the Q angle [mean difference: -0.3º (95%CI -3.0-2.4)]. No associations were found between increased rearfoot valgus [odds ratio: 1.29 (95%CI 0.51-3.25)] or increased Q angle [odds ratio: 0.77 (95%CI 0.31-1.93)] and the occurrence of patellofemoral pain syndrome. Although these measures are widely used in clinical practice and theoretically plausible, it cannot be affirmed that increased rearfoot valgus and increased Q angle, when statically measured in relaxed stance, are associated with patellofemoral pain syndrome (PFPS). These measures may have limited applicability in screening for PFPS development.
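Odds ratios with confidence intervals like those reported above can be obtained from a 2×2 exposure-by-case table with the usual log-OR normal approximation; a minimal sketch (the counts below are hypothetical, not taken from the study):

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts: increased rearfoot valgus vs. PFPS status.
print(odds_ratio_ci(a=14, b=20, c=16, d=27))
```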

Relevance:

30.00%

Publisher:

Abstract:

Background: Eating disorder (ED) patients often have comorbidities with other psychiatric disorders, especially mood disorders. Although recent studies suggest an intimate relationship between ED and bipolar disorder (BD), no study using a broader bipolar spectrum definition had been carried out in this population. We aimed to study the occurrence of the bipolar spectrum (BS) and comorbidities in eating disorder patients of a tertiary service provider. Methods: Sixty-nine female patients diagnosed with anorexia nervosa, bulimia nervosa, or eating disorder not otherwise specified were evaluated. The assessment comprised the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) and the clinical criteria for diagnosis of the Zurich bipolar spectrum. Mann-Whitney tests compared means of continuous variables. The association between categorical variables and the groups was described using contingency tables and analyzed using the chi-square or Fisher's exact test. The significance level alpha was set at 5%. Results: 68.1% of patients had comorbidity with the bipolar spectrum, and this was associated with higher family income, a higher proportion of married people, and comorbidity with substance use. The ED with BS group showed higher rates of substance use comorbidity (40.4%) than the ED without BS group (13.6%). Discussion: These results show that the bipolar spectrum is a common comorbidity in patients with eating disorders and is associated with correlates of clinical importance, notably comorbidity with substance use. Due to the pattern of similarity between the groups with and without comorbid bipolar spectrum across the various evaluated outcomes, identifying the comorbidity can be difficult. However, precise diagnosis and careful identification of clinical correlates may contribute to future advances in treating these conditions. Further studies are necessary to evaluate the association of other clinical correlates and possible causal associations.
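Group comparisons of continuous variables like those described map directly onto SciPy's Mann-Whitney U test; a minimal sketch with simulated data (the variable, distribution and values are hypothetical):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical continuous variable (e.g., family income) per group.
ed_with_bs = rng.lognormal(mean=2.2, sigma=0.5, size=47)
ed_without_bs = rng.lognormal(mean=2.0, sigma=0.5, size=22)

u, p = mannwhitneyu(ed_with_bs, ed_without_bs, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")  # compare against the 5% level
```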

Relevance:

30.00%

Publisher:

Abstract:

The aims of this work are: (i) to produce new experimental data for fretting fatigue considering the presence of a mean bulk stress, and (ii) to assess two design methodologies against failure by fretting fatigue. Tests on a cylinder-flat contact configuration were conducted using a fretting apparatus mounted on a servo-hydraulic machine. The material used for both the pads and the fatigue specimens was an aeronautical 7050-T7451 Al alloy. The experimental program was designed with all relevant parameters, apart from the mean bulk load (always applied before the contact loads), kept constant. The mean bulk stress varied from compressive to tensile values while a high peak pressure was maintained in order to encourage crack initiation. Two methodologies against fretting fatigue are proposed and compared against the experimental data. The non-local stress-based methodology evaluates a critical plane fatigue criterion at the center of a process zone located beneath the contacting surfaces. The results showed that it correctly predicts crack initiation, but it was not capable of providing successful predictions of the integrity of the specimens. Alternatively, we considered a crack arrest criterion, which has the potential to provide a more complete description of the integrity of the specimens.
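The abstract does not name the critical plane criterion used; purely as an illustration, a Findley-type parameter (maximum over plane orientations of the shear stress amplitude plus k times the peak normal stress) evaluated at a process-zone point could be sketched as follows, with a hypothetical in-plane stress cycle:

```python
import numpy as np

def findley_parameter(sxx, syy, sxy, k=0.3, n_planes=180):
    """Findley critical-plane parameter for an in-plane stress history.
    sxx, syy, sxy: 1D arrays (stress vs. time). Returns the maximum over
    plane orientations of shear amplitude + k * peak normal stress."""
    best = -np.inf
    for theta in np.linspace(0.0, np.pi, n_planes, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        # Stress components resolved on the plane with normal (c, s).
        s_n = sxx * c**2 + syy * s**2 + 2 * sxy * s * c
        tau = (syy - sxx) * s * c + sxy * (c**2 - s**2)
        f = 0.5 * (tau.max() - tau.min()) + k * s_n.max()
        best = max(best, f)
    return best

t = np.linspace(0, 2 * np.pi, 200)
# Hypothetical fretting cycle: oscillating bulk stress plus contact shear.
print(findley_parameter(100 + 80 * np.sin(t), 0 * t, 30 * np.sin(t)))
```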

Relevance:

30.00%

Publisher:

Abstract:

One of the problems in the analysis of nucleus-nucleus collisions is to obtain information on the value of the impact parameter b. This work consists in the application of pattern recognition techniques aimed at associating values of b to groups of events. To this end, a support vector machine (SVM) classifier is adopted to analyze multifragmentation reactions. This method allows backtracing the values of b through a particular multidimensional analysis. The SVM classification consists of two main phases. In the first one, known as the training phase, the classifier learns to discriminate the events generated by two different models, Classical Molecular Dynamics (CMD) and Heavy-Ion Phase-Space Exploration (HIPSE), for the reaction 58Ni + 48Ca at 25 AMeV. In the second one, known as the test phase, what has been learned is tested on new events generated by the same models. These new results have been compared to the ones obtained through other impact-parameter backtracing techniques. Our tests show that, following this approach, central and peripheral collisions for the CMD events are always better classified than with the other backtracing techniques. We finally performed the SVM classification on the experimental data measured by the NUCL-EX collaboration with the CHIMERA apparatus for the aforementioned reaction.
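A minimal scikit-learn sketch of the two-phase workflow described (training on model-labeled events, then testing on fresh events); the features and labels below are synthetic placeholders, whereas the real analysis uses many event observables from CMD and HIPSE:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical event features (e.g., multiplicity, transverse energy, ...)
# labeled by impact-parameter class: 0 = central, 1 = peripheral.
X = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(1.5, 1, (500, 4))])
y = np.repeat([0, 1], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)                             # training phase
print("test accuracy:", clf.score(X_te, y_te))  # test phase
```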

Relevance:

30.00%

Publisher:

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for the determination of the 3D structure of the deep Earth's interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of nonstationary processes that contain multi-scale features, discontinuities and sharp transitions. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
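A minimal PyWavelets sketch of the second use named above (the Continuous Wavelet Transform as an analysis kernel), applied here to a synthetic nonstationary signal; the scale range and wavelet choice are illustrative, not the thesis' settings:

```python
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
# Synthetic nonstationary signal: low frequency plus a late high-frequency burst.
sig = np.sin(2 * np.pi * 8 * t) + (t > 0.6) * np.sin(2 * np.pi * 64 * t)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(sig, scales, "morl", sampling_period=t[1] - t[0])
print(coeffs.shape)  # (n_scales, n_samples): a scale-time map of the signal
```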

Relevance:

30.00%

Publisher:

Abstract:

Composite porcelain enamels are inorganic coatings for metallic components based on a special ceramic-vitreous matrix in which specific additives are randomly dispersed. The ceramic-vitreous matrix is made from a mixture of various raw materials and elements; in particular, it is based on boron-silicate glass added with metal oxides(1) of titanium, zinc, tin, zirconia, alumina, etc. These additions are often used to improve and enhance important properties such as corrosion(2) and wear resistance, mechanical strength and fracture toughness, as well as aesthetic functions. The coating process, called enamelling, depends on the nature of the surface, but also on the kind of porcelain enamel used. For metal sheet coatings, two industrial processes are currently used: one based on a wet porcelain enamel and another based on a dry-silicone porcelain enamel. During the firing process, performed at about 870°C in the case of a steel substrate, the enamel raw material melts and interacts with the metal substrate, enabling the formation of a continuously varying structure. The interface domain between the substrate and the external layer is a complex material system in which the ceramic-vitreous and metal constituents are mixed. In particular, four main regions can be identified: (i) the pure metal region, (ii) the region where the metal constituents dominate over the ceramic-vitreous components, (iii) the region where the ceramic-vitreous constituents dominate over the metal ones, and (iv) the region composed of the pure ceramic-vitreous material. Note also the presence of metallic dendrites that bridge the substrate and the external layer, passing through the interphase region. Each region of the final composite structure plays a specific role: the metal substrate has mainly a structural function, the interphase region and the embedded dendrites guarantee the adhesion of the external vitreous layer to the substrate, and the external vitreous layer is characterized by high tribological, corrosion and thermal shock resistance. Such a material, due to its internal composition, functionalization and architecture, can be considered a functionally graded composite material. The knowledge of the mechanical, tribological and chemical behavior of such composites is not well established and research is still in progress. In particular, mechanical performance data for the composite coating are not yet established. In the present work the residual stresses, the Young's modulus and the first crack failure of the composite porcelain enamel coating are studied. Due to the differences between the thermal properties of the porcelain composite enamel and the steel, enamelled steel sheets carry residual stresses: a compressive residual stress acts on the coating and a tensile residual stress acts on the steel sheet. The residual stress estimation was performed by measuring the curvature of rectangular one-side-coated specimens. The Young's modulus and the first crack failure (FCF) of the coating were estimated by four point bending tests (3-7) monitored by means of the acoustic emission (AE) technique(5,6). In particular, the AE information was used to identify, during the bending tests, the displacement domain over which no coating failure occurs (Free Failure Zone, FFZ). In the FFZ domain, the Young's modulus was estimated according to ASTM D6272-02. The FCF was calculated as the ratio between the displacement at the first crack of the coating and the coating thickness on the cracked side. The mechanical performance of the tested coated specimens was also related to the respective microstructure and surface characteristics and discussed by means of double entry charts.
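Two of the quantities above have compact closed forms. Curvature-based residual stress estimates in thin coatings are commonly obtained with Stoney-type formulas, and ASTM D6272 gives the four-point-bending flexural modulus for quarter-point loading. A sketch under the assumption that the simple Stoney expression applies (the paper's exact curvature model is not given); all numerical values are hypothetical:

```python
def stoney_stress(E_s, nu_s, t_s, t_c, R):
    """Residual stress in a thin coating from substrate curvature
    (Stoney formula; assumes coating much thinner than substrate).
    E_s, nu_s: substrate Young's modulus [Pa] and Poisson ratio;
    t_s, t_c: substrate and coating thickness [m]; R: curvature radius [m]."""
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_c * R)

def flexural_modulus_quarter_point(L, b, d, m):
    """ASTM D6272 tangent flexural modulus, load at the quarter points:
    E = 0.17 * L^3 * m / (b * d^3), where m is the slope of the initial
    load-deflection curve [N/m], L the support span, b width, d depth [m]."""
    return 0.17 * L**3 * m / (b * d**3)

# Hypothetical values for an enamelled steel sheet specimen.
print(stoney_stress(E_s=210e9, nu_s=0.30, t_s=0.8e-3, t_c=0.2e-3, R=2.5))
print(flexural_modulus_quarter_point(L=60e-3, b=10e-3, d=1.0e-3, m=5.0e4))
```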

Relevance:

30.00%

Publisher:

Abstract:

If the historian's job is to understand the past as it was understood by the people who lived it, then perhaps it is not far-fetched to think that it is also necessary to communicate the results of research with the tools proper to an era, tools that shape the mentality of those who live in it. Emerging technologies, especially in the multimedia field such as virtual reality, allow historians to communicate the experience of the past in multiple senses. How does history collaborate with information technologies, focusing on the possibility of making virtual historical reconstructions, with related examples and reviews? What most concerns historians is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of knowing history that can be considered valid. That is, is the emotion that navigating a 3D reality can evoke a means capable of transmitting knowledge? Or is our idea of the past and of its study subtly changed the moment it is disseminated through 3D graphics? For some time, however, the discipline has begun to come to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularization of the past and by a partial and unscientific popularization of it. In a post-literary world we must begin to think that the visual culture in which we are immersed is changing our relationship with the past: this does not mean that the knowledge built up to now is false, but it is necessary to recognize that there is more than one historical truth, sometimes written, sometimes visual. The computer has become a ubiquitous platform for the representation and dissemination of information, and methods of interaction and representation are constantly evolving. It is along these two tracks that the offer of information technologies at the service of history moves. The purpose of this thesis is precisely to explore, through the use and experimentation of different tools and information technologies, how the past can be effectively narrated through three-dimensional objects and virtual environments, and how, as characterizing elements of communication, these can collaborate, in this particular case, with the discipline of history. This research reconstructs some lines of the history of the main factories active in Turin during the Second World War. Recalling the close relationship between structures and individuals, and in this city in particular between factory and workers' movement, it is inevitable to delve into the events of the Turin workers' movement, which during the period of the Liberation struggle was a political and social actor of primary importance in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories were the main targets of the bombings, and it was in the factories that a war of liberation was fought between the working class and the factory and city authorities. The factory becomes the place of the "usurpation of power" that Weber speaks of, the stage on which the various episodes of the war take place: strikes, deportations, occupations ...
The model of the city represented here is not a simple visualization but an information system in which the modeled reality is represented by objects that serve as a theater for events with a precise chronological placement, and within which it is possible to select static renders (images), pre-computed films (animations) and interactively navigable scenarios, as well as to search bibliographic sources and scholars' comments specifically linked to the event in question. The objective of this work is to make the historical disciplines and computer science interact, through various projects, across the different technological opportunities the latter offers. The reconstruction possibilities offered by 3D are thus placed at the service of research, offering an integral vision capable of bringing us closer to the reality of the period under consideration and conveying all the results into a single presentation platform. Dissemination. Project: Multimedia Information Map of Turin 1945. On a practical level, the project provides a navigable interface (Flash technology) representing the map of the city at the time, through which it is possible to have a vision of the places and times in which the Liberation took shape, both on a conceptual and on a practical level. This interweaving of coordinates in space and time not only improves the understanding of the phenomena, but creates greater interest in the subject through the use of highly effective (and appealing) dissemination tools, without losing sight of the need to validate the historical theses while serving as a didactic platform. Such a context requires an in-depth study of the historical events in order to reconstruct with clarity a map of the city that is precise both topographically and at the level of multimedia navigation. The preparation of the map must follow current standards; therefore, the software solutions used are those provided by Adobe Illustrator for the creation of the topography, and by Macromedia Flash for the creation of a navigation interface. The descriptive database is of course consultable, being contained in the media support and fully annotated in the bibliography. It is the continuous evolution of information technologies and the massive spread of computer use that leads to a substantial change in historical study and learning; academic institutions and economic operators have embraced the request coming from users (teachers, students, cultural heritage professionals) for a wider dissemination of historical knowledge through its computerized representation. On the educational front, the reconstruction of a historical reality through computer tools also allows non-historians to experience first-hand the problems of research, such as missing sources, gaps in the chronology, and the assessment of the veracity of facts through evidence. Information technologies allow a complete, unified and exhaustive vision of the past, conveying all the information onto a single platform and enabling even non-specialists to immediately understand what is being discussed. The best history book, by its nature, cannot do this, since it divides and organizes information differently. In this way students are given the opportunity to learn through a representation different from those they are used to.
The central premise of the project is that student learning outcomes can be improved if a concept or content is communicated through multiple channels of expression, in our case through text, images and a multimedia object. Didactics. The Conceria Fiorio is one of the symbolic places of the Turin Resistance, and the project is a virtual reality reconstruction of the Conceria Fiorio in Turin. The reconstruction serves to enrich historical culture both for those who produce it, through accurate research of the sources, and for those who can then benefit from it, especially young people who, attracted by the playful aspect of the reconstruction, learn more easily. The construction of a 3D artifact gives students the basis for recognizing and expressing the correct relationship between the model and the historical object. The stages of work through which the 3D reconstruction of the Conceria was achieved were: in-depth historical research, based on sources, which may be archival documents or archaeological excavations, iconographic and cartographic sources, etc.; the modeling of the buildings on the basis of the historical research, to provide the polygonal geometric structure that allows three-dimensional navigation; and the realization, through computer graphics tools, of the 3D navigation. Unreal Technology is the name given to the graphics engine used in numerous commercial video games. One of the fundamental features of this product is a tool called Unreal editor, with which it is possible to build virtual worlds, and this is the one used for this project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine; the free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and view specific information at a number of points of interest. Navigation is carried out in first person; a process of "spectacularization" of the visited environments through suitable furnishing gives the user greater immersion, making the environment more credible and immediately legible. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without the need for programming work. This engine is therefore particularly suited to the rapid production of prototypes of decent quality; the presence of a certain number of bugs, however, makes it partly unreliable. Using a videogame editor for this reconstruction suggests the possibility of its use in teaching: what 3D simulations allow, in this specific case, is to let students experience the work of historical reconstruction, with all the problems the historian must face in recreating the past. This work aims to be, for historians, an experience in the direction of creating a broader expressive repertoire, one that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes those engaged in teaching skeptical, but the experience of projects already developed, especially abroad, shows that they are a good investment.
The fact that a software house creating a highly successful video game includes in its product a set of tools that allow users to create their own worlds to play in is symptomatic of the fact that the computer literacy of average users is growing ever more rapidly, and that the use of an editor like Unreal Engine will in the future be an activity within the reach of an ever wider audience. This puts us in a position to design more immersive teaching modules, in which the experience of researching and reconstructing the past is interwoven with the more traditional study of the events of a given period. Interactive virtual worlds are often described as the key cultural form of the 21st century, as cinema was for the 20th. The aim of this work has been to suggest that there are great opportunities for historians in employing 3D objects and environments, and that they must seize them. Consider the fact that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take when they must be disseminated. A historical analysis carried out superficially or on erroneous premises can still be disseminated and gain credibility in many circles if spread through attractive, modern media. This is why it is not advisable to bury a good piece of work in some library, waiting for someone to discover it; this is why historians must not ignore 3D. Our ability, as scholars and students, to perceive important ideas and orientations often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. A historical reconstruction can be very useful from an educational point of view not only for those who visit it but also for those who create it; the research phase necessary for the reconstruction can only increase the cultural background of the developer. Conclusions. The most important outcome has been the opportunity to gain experience in using media of this kind to narrate and make known the past. Reversing the cognitive paradigm I had learned in my humanities studies, I tried to derive what we might call "universal laws" from the objective data that emerged from these experiments. From an epistemological point of view, computer science, with its capacity to handle impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has gone in this direction, seeking to learn and use current tools that will have an ever greater presence in communication (including scientific communication) in the future, and that are the communication media of choice for certain age groups (adolescents). Pushing the terms to the extreme, we could say that the challenge that visual culture poses today to the traditional methods of doing history is the same that Herodotus and Thucydides posed to the narrators of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city.
In a post-literary world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our ability as scholars and students to perceive important ideas and orientations often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their goals as researchers and teachers. The experiences collected in the preceding pages lead us to think that in a not-too-distant future a tool like the computer will be the only means through which to transmit knowledge, and that, from an educational point of view, its interactivity engages students as no other modern communication medium can.

Relevance:

30.00%

Publisher:

Abstract:

The miniaturization race in the hardware industry, aiming at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end-user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying heterogeneous computational devices using local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel advanced technique to accelerate memory access was developed and implemented.

Relevance:

30.00%

Publisher:

Abstract:

The work of this thesis has focused on the characterization of metallic membranes for hydrogen purification from the steam reforming process, and of perfluorosulphonic acid ionomer (PFSI) membranes suitable as electrolytes in fuel cell applications. The experimental study of metallic membranes was divided into three sections: synthesis of palladium and silver-palladium coatings on porous ceramic supports via electroless deposition (ELD); solubility and diffusivity analysis of hydrogen in palladium-based alloys (temperature range between 200 and 400 °C, up to 12 bar of pressure); and permeation experiments with pure hydrogen and with mixtures containing, besides hydrogen, also nitrogen and methane, at high temperatures (up to 600 °C) and pressures (up to 10 bar). Sequential deposition of palladium and silver onto porous alumina tubes by the ELD technique was carried out using two different procedures: a stirred batch method and a continuous flux method. Pure palladium as well as Pd-Ag membranes were produced; the Pd-Ag membranes' composition was calculated to be close to 77% Pd and 23% Ag by weight, the target value corresponding to the best performance of palladium-based alloys. One of the membranes produced showed an infinite selectivity towards hydrogen and a relatively high permeability value, and is suitable for potential use as a hydrogen separator. The hydrogen sorption in silver-palladium alloys was studied in a gravimetric system on films produced by the ELD technique; in the temperature range inspected, up to 400 °C, there is still a lack of data in the literature. The experimental data were analyzed with rigorous equations, allowing calculation of the enthalpy and entropy values of the Sieverts' constant; the results were in very good agreement with the extrapolation made from literature data obtained at lower temperatures (up to 150 °C). The information obtained in this study would be directly usable in the modeling of hydrogen permeation in Pd-based systems. Pure and mixed gas permeation tests were performed on Pd-based hydrogen selective membranes at operative conditions close to steam-reforming ones. Two membranes (one produced in this work and another produced by NGK Insulators, Japan) showed a virtually infinite selectivity and good permeability. Mixture data revealed the existence of non-negligible resistances to hydrogen transport in the gas phase. Even if the decrease of the driving force due to concentration polarization phenomena occurs, in principle, in all membrane-based separation systems endowed with high perm-selectivity, an extensive experimental analysis of this effect in palladium-based membrane processes is, at the moment, lacking in the literature. Moreover, a new procedure has been introduced for the proper comparison of the mass transport resistance in the gas phase and in the membrane. Another object of study was water vapor sorption and permeation in PFSI membranes with short and long side chains; the permeation of gases (i.e., He, N2 and O2) in dry and humid conditions was also considered. The water vapor sorption showed strong interactions between the hydrophilic groups and the water, as revealed by the hysteresis in the sorption-desorption isotherms and by thermogravimetric analysis. The data obtained were used in the modeling of water vapor permeation, described as diffusion-reaction of water molecules, and in the humid gas permeation experiments.

In the dry gas experiments, the permeability and diffusivity were found to increase with temperature and with the equivalent weight (EW) of the membrane. A linear correlation was drawn between the dry gas permeability and the reciprocal of the equivalent weight of PFSI membranes, based on which the permeability of pure PTFE is retrieved in the limit of high EW. On the other hand, the O2, N2 and He permeability values were found to increase significantly, and in a similar fashion, with water activity. A model that considers the PFSI membrane as a composite matrix with a hydrophilic and a hydrophobic phase was adopted, allowing estimation of the variation of gas permeability with relative humidity on the basis of the permeability in the dry PFSI membrane and in pure liquid water.
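The gravimetric analysis described above rests on Sieverts' law for dilute hydrogen dissolution and the van't Hoff temperature dependence of its constant; in conventional notation (the thesis' own symbols may differ):

```latex
C = K_S\,\sqrt{p_{\mathrm{H_2}}}, \qquad
\ln K_S = \frac{\Delta S_s}{R}-\frac{\Delta H_s}{RT}, \qquad
J_{\mathrm{H_2}} = \frac{P}{\delta}\left(\sqrt{p_{\mathrm{ret}}}-\sqrt{p_{\mathrm{perm}}}\right),
```

where $C$ is the dissolved hydrogen concentration, $\Delta H_s$ and $\Delta S_s$ are the dissolution enthalpy and entropy, and the last expression is the diffusion-limited permeation flux through a membrane of thickness $\delta$ and permeability $P$; the square-root pressure dependence follows directly from Sieverts' law.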

Relevance:

30.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile embedded sensors is deployed into the environment, the Wireless Sensor Network (WSN) is one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and to allow easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth capability of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging could play, and already plays, an important role in sensing devices for ambient intelligence.
Computer vision can for instance be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and to extend node lifetime; • low cost, for a widespread diffusion. These limitations typically result in the adoption of low power, low cost devices such as low power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely a sophisticated classification algorithm and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low power, low cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, namely Model Predictive Control (MPC), to help the system improve its performance, outperforming naive power management policies.
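A toy sketch of the PIR-gated policy described above (camera and radio stay asleep until the PIR fires and the energy budget allows a capture); the driver objects, method names and thresholds are entirely hypothetical, not the thesis' firmware:

```python
import time

WAKE_COST_J = 0.8     # hypothetical energy cost of one capture + transmit
MIN_RESERVE_J = 5.0   # hypothetical reserve so the node survives the night

def run_node(pir, camera, radio, battery):
    """Duty-cycle loop: sleep until PIR motion, then gate the expensive
    video pipeline on the remaining energy budget.
    pir/camera/radio/battery are hypothetical hardware driver objects."""
    while True:
        pir.wait_for_motion()                     # MCU sleeps here (low power)
        if battery.energy_j() - WAKE_COST_J < MIN_RESERVE_J:
            time.sleep(10)                        # skip event, preserve charge
            continue
        frame = camera.capture()                  # wake camera only on demand
        if camera.detect_object(frame):           # on-node classification
            radio.send(camera.compress(frame))    # transmit only if relevant
```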

Relevance:

30.00%

Publisher:

Abstract:

In this study, new tomographic models of Colombia were calculated using the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period, the improvement of the seismic network yields more stable hypocentral results with respect to older data sets and allows the computation of new 3D Vp and Vp/Vs models. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela due to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of E-SE subducting Nazca lithosphere beneath central and southern Colombia. The north-south changes in the Wadati-Benioff zone, the Vp and Vp/Vs patterns and the volcanism show that the downgoing plate is segmented by E-W directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector represent most of the Colombian seismicity and are concentrated in the 100-170 km depth interval, beneath the Eastern Cordillera. Here a massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, show, conversely, a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here an oceanic crust of normal thickness is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.