943 results for non-traditional students


Relevância:

30.00%

Publicador:

Resumo:

The Epstein-Barr virus (EBV) is associated with a broad spectrum of lymphoproliferative diseases. Traditional methods of EBV detection include the immunohistochemical identification of viral proteins and DNA probes against the viral genome in tumoral tissue. The present study explored detection of the EBV genome, using the BALF5 gene, in the bone marrow or blood mononuclear cells of patients with diffuse large B-cell lymphoma (DLBCL) and related its presence to clinical variables and risk factors. The results show that EBV detection, positive in 21.5% of patients, is not associated with age, gender, staging, B symptoms, International Prognostic Index scores or any analytical parameters, including lactate dehydrogenase (LDH) and beta-2 microglobulin (B2M). The majority of patients were treated with R-CHOP-like regimens (rituximab, cyclophosphamide, doxorubicin, vincristine and prednisolone, or an equivalent combination) and some with CHOP-like chemotherapy. Response rates [complete response (CR) + partial response (PR)] were not significantly different between EBV-negative and EBV-positive cases, at 93.2% and 88.9%, respectively. Survival was also similar in the two groups, with 5-year overall survival (OS) rates of 64.3% and 76.7%, respectively. However, when the treatment groups were analyzed separately, there was a trend toward worse prognosis in EBV-positive patients treated with CHOP-like regimens that was not seen in patients treated with R-CHOP-like regimens. We conclude that EBV detection in the bone marrow and blood mononuclear cells of DLBCL patients occurs at the same frequency as EBV detection in tumoral lymphoma tissue, but is not associated with risk factors, response rate or survival in patients treated mainly with immunochemotherapy including rituximab. These results also suggest that adding rituximab to chemotherapy improves the prognosis associated with EBV detection in DLBCL.
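The "not significantly different" response rates quoted in the abstract (93.2% vs. 88.9%) can be checked informally with a pooled two-proportion z-test. This is a generic sketch, not the study's actual statistical method, and the group sizes used in the test below are assumptions for illustration; the abstract does not report the exact counts.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value).
    x1/n1 and x2/n2 are successes/total in each group."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_val = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_val

# Hypothetical group sizes chosen so the rates match 93.2% and 88.9%:
z, p = two_proportion_ztest(55, 59, 16, 18)
```

With these assumed counts the p-value is well above 0.05, consistent with the abstract's claim of no significant difference.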

Relevância:

30.00%

Publicador:

Resumo:

The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities, who may have difficulty attending traditional on-site learning programs, which are typically based on printed learning resources. The creation and provision of accessible e-learning content may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and offers the opportunity to browse didactic materials anywhere, outside the places traditionally devoted to education. Both situations share the need to access materials under limited conditions, and collide with the growing use of rich media in didactic content, which is designed to be enjoyed without any restriction. Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video lectures. Rich media in e-learning offer significant potential for enhancing the learning environment: they help increase access to education, enhance the learning experience and support multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. Such highly varied and structured content can significantly improve the quality and effectiveness of educational activities for learners. For example, rich media content allows us to describe complex concepts and process flows; audio and video elements may be used to add a “human touch” to distance-learning courses; and real lectures may be recorded and distributed to complement or enrich online materials. 
A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, owing to the ease of recording and delivering activities that take place in a traditional classroom. Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources that would otherwise present significant barriers for them. For instance, students with visual impairments have difficulty reading traditional visual materials, deaf learners have trouble following traditional (spoken) lectures, and people with motor disabilities have problems attending on-site programs. As already mentioned, the use of wireless technologies and pervasive computing can greatly enhance the learner's educational experience by offering mobile e-learning services that can be accessed from handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users, without hindrance, thus increasing the social inclusion of non-typical learners. While the provision of fully accessible and portable video lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media content for mobile learning services are complex and expensive tasks. Indeed, the major difficulties originate from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO). 
Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both of these tasks are truly time-consuming, depending on the type of content the teacher is writing and on the authoring tool he or she is using. Because of these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis discusses the key issues of a system we have developed to deliver accessible, customized and nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) that are dynamically adapted and transcoded according to the specific needs of non-typical users and to the barriers they may encounter in the environment. The basic idea is to adapt content dynamically, by selecting it from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
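The adaptation idea described above can be sketched as a rule that maps each media resource of an LO to the variant a given learner can actually use. This is only an illustrative sketch: the profile fields, resource kinds and function names below are assumptions, not the actual API of the thesis's system.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    # Hypothetical access-needs flags (assumed, not the system's real schema)
    can_see: bool = True
    can_hear: bool = True
    mobile_device: bool = False

@dataclass
class MediaResource:
    name: str
    kind: str                                          # "video", "audio", "text", ...
    alternatives: dict = field(default_factory=dict)   # kind -> alternative file name

def adapt_resource(res, profile):
    """Pick the variant of one media resource that fits the learner profile."""
    if res.kind == "video" and not profile.can_see:
        # blind learner: prefer an audio track, fall back to text
        return res.alternatives.get("audio", res.alternatives.get("text"))
    if res.kind in ("video", "audio") and not profile.can_hear:
        # deaf learner: prefer captions, fall back to a transcript
        return res.alternatives.get("captions", res.alternatives.get("text"))
    if res.kind == "video" and profile.mobile_device:
        # mobile learner: transcode down to a low-bitrate version if available
        return res.alternatives.get("low_bitrate", res.name)
    return res.name

def adapt_learning_object(resources, profile):
    """Adapt every resource in a (SCORM-like) LO package for one learner."""
    return [adapt_resource(r, profile) for r in resources]
```

For example, a video lecture packaged with captions, a transcript, an audio track and a low-bitrate encoding would resolve to the captions file for a deaf learner and to the audio track for a blind one.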

Relevância:

30.00%

Publicador:

Resumo:

If the historian's job is to understand the past as it was understood by the people who lived it, then perhaps it is not far-fetched to think that it is also necessary to communicate the results of research with the tools of one's own era, tools that shape the mentality of those who live in it. Emerging technologies, especially in the area of multimedia such as virtual reality, allow historians to communicate the experience of the past in more than one sense. How does history collaborate with information technology, particularly as regards the possibility of building virtual historical reconstructions, with related examples and reviews? What most concerns historians is whether a reconstruction of a past event, experienced through its recreation in pixels, is a method of historical knowledge that can be considered valid. In other words, is the emotion that navigating a 3D reality can arouse a means capable of transmitting knowledge? Or is the idea we have of the past and of its study subtly changed the moment it is disseminated through 3D graphics? For some time now, however, the discipline has begun to come to terms with this situation, forced above all by the invasiveness of this type of media, by the spectacularization of the past and by a partial and unscientific popularization of it. In a post-literary world we must begin to recognize that the visual culture in which we are immersed is changing our relationship with the past: this does not make the knowledge accumulated so far false, but it is necessary to acknowledge that more than one historical truth exists, sometimes written and sometimes visual. The computer has become a ubiquitous platform for the representation and dissemination of information. Methods of interaction and representation are constantly evolving, and it is along these two tracks that information technology offers its services to history. 
The aim of this thesis is precisely to explore, through the use and testing of different tools and information technologies, how the past can be effectively narrated through three-dimensional objects and virtual environments, and how, as characterizing elements of communication, they can collaborate, in this particular case, with the discipline of history. The present research reconstructs some threads of the history of the main factories active in Turin during the Second World War. Recalling the close relationship that exists between structures and individuals, and in this city in particular between factory and workers' movement, it is inevitable to delve into the events of the Turin workers' movement, which during the period of the Liberation struggle was a political and social actor of the first rank in the city. In the city, understood as a biological entity involved in the war, the factory (or the factories) becomes the conceptual nucleus through which to read the city: the factories are the main targets of the bombings, and it is in the factories that a war of liberation is fought between the working class and the factory and city authorities. The factory becomes the place of the "usurpation of power" of which Weber speaks, the stage on which the various episodes of the war take place: strikes, deportations, occupations... The model of the city represented here is not a simple visualization but an information system in which the modeled reality is represented by objects that serve as the theater for events with a precise chronological placement; within it, it is possible to select static renders (images), precomputed footage (animations) and interactively navigable scenarios, as well as to search bibliographic sources and scholars' commentary specifically linked to the event in question. 
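The information system described above (city objects anchoring dated events, each linked to renders, animations, navigable scenes and sources) can be sketched as a small data model. The field names below are illustrative assumptions, not the schema the thesis actually uses.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Event:
    title: str
    when: date
    renders: list = field(default_factory=list)      # static images
    animations: list = field(default_factory=list)   # precomputed footage
    scenes: list = field(default_factory=list)       # navigable 3D scenarios
    sources: list = field(default_factory=list)      # bibliography, commentary

@dataclass
class CityObject:
    """A modeled object of the city (e.g. a factory) acting as the
    'theater' for events with a precise chronological placement."""
    name: str
    events: list = field(default_factory=list)

    def events_between(self, start, end):
        """Chronological selection: events falling within a date range."""
        return sorted((e for e in self.events if start <= e.when <= end),
                      key=lambda e: e.when)
```

A query such as "what happened at this factory during 1945" then becomes a simple date-range selection over its events.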
The objective of this work is to make the historical disciplines and computer science interact, through various projects, across the different technological opportunities the latter offers. The reconstruction possibilities offered by 3D are thus placed at the service of research, offering an integral vision capable of bringing us closer to the reality of the period under consideration and channeling all the results into a single presentation platform. Dissemination: the Mappa Informativa Multimediale Torino 1945 project. On the practical level, the project provides a navigable interface (Flash technology) representing the map of the city at the time, through which it is possible to see the places and times in which the Liberation took shape, both conceptually and practically. This interweaving of coordinates in space and time not only improves the understanding of the phenomena, but creates greater interest in the subject through the use of highly effective (and appealing) popularization tools, without losing sight of the need to validate the historical theses by serving as a teaching platform. Such a context requires an in-depth study of the historical events, in order to reconstruct clearly a map of the city that is precise both topographically and at the level of multimedia navigation. The preparation of the map must follow current standards, so the software solutions used are those provided by Adobe Illustrator for the creation of the topography and by Macromedia Flash for the creation of the navigation interface. The underlying descriptive data can of course be consulted, being contained in the media support and fully annotated in the bibliography. 
It is the continuous evolution of information technologies and the massive spread of computer use that is bringing about a substantial change in historical study and learning; academic institutions and economic actors have embraced the demand coming from users (teachers, students, cultural heritage professionals) for a wider dissemination of historical knowledge through its computerized representation. On the teaching side, reconstructing a historical reality through computer tools also allows non-historians to experience at first hand the problems of research, such as missing sources, gaps in the chronology and the assessment of the veracity of facts through evidence. Information technologies allow a complete, unitary and exhaustive vision of the past, channeling all the information onto a single platform and enabling even non-specialists to grasp immediately what is being discussed. The best history book, by its very nature, cannot do this, since it divides and organizes the information differently. In this way students are given the opportunity to learn through a representation different from those they are used to. The central premise of the project is that students' learning outcomes can be improved if a concept or content is communicated through several channels of expression, in our case through text, images and a multimedia object. Teaching: the Conceria Fiorio is one of the symbolic places of the Turin Resistance, and the project is a virtual-reality reconstruction of the Conceria Fiorio in Turin. The reconstruction enriches historical culture both for those who produce it, through careful research of the sources, and for those who can then benefit from it, especially young people who, attracted by the playful aspect of the reconstruction, learn more easily. 
Building an artifact in 3D gives students the basis for recognizing and expressing the correct relationship between the model and the historical object. The stages of work through which the 3D reconstruction of the Conceria was reached were: an in-depth historical investigation, based on sources, which may be archival documents or archaeological excavations, iconographic sources, cartographic sources, etc.; the modeling of the buildings on the basis of the historical research, to provide the polygonal geometric structure that allows three-dimensional navigation; and the creation, through computer-graphics tools, of the 3D navigation. Unreal Technology is the name given to the graphics engine used in numerous commercial video games. One of the fundamental characteristics of this product is a tool called the Unreal editor, with which it is possible to build virtual worlds, and this is what was used for this project. UnrealEd (UEd) is the software for creating levels for Unreal and for games based on the Unreal engine; the free version of the editor was used. The final result of the project is a navigable virtual environment depicting an accurate reconstruction of the Conceria Fiorio at the time of the Resistance. The user can visit the building and display specific information about certain points of interest. Navigation takes place in the first person; a process of "spectacularization" of the visited spaces, through appropriate furnishing, gives the user a greater sense of immersion, making the environment more credible and immediately readable. The Unreal Technology architecture made it possible to obtain a good result in a very short time, without any programming being necessary. This engine is therefore particularly suitable for the rapid creation of prototypes of decent quality, although the presence of a certain number of bugs makes it partly unreliable. 
Using a video-game editor for this reconstruction points toward its possible use in teaching: what 3D simulations allow, in this specific case, is to let students experience the work of historical reconstruction, with all the problems the historian must face in recreating the past. This work is intended to be, for historians, a step toward the creation of a broader expressive repertoire that includes three-dimensional environments. The risk of spending time learning how this technology for generating virtual spaces works makes many of those engaged in teaching sceptical, but the experience of projects developed elsewhere, especially abroad, shows that it is a good investment. The fact that a software house, in creating a highly successful video game, includes in its product a set of tools allowing users to create their own worlds in which to play is a sign that the computer literacy of average users is growing ever more rapidly, and that the use of an editor such as the Unreal engine will in the future be within the reach of an ever wider public. This puts us in a position to design more immersive teaching modules, in which the experience of researching and reconstructing the past is interwoven with the more traditional study of the events of a given period. Interactive virtual worlds are often described as the key cultural form of the twenty-first century, as cinema was for the twentieth. The aim of this work has been to suggest that there are great opportunities for historians in using 3D objects and environments, and that they must seize them. Consider the fact that aesthetics has an effect on epistemology, or at least on the form that the results of historical research take when they must be disseminated. 
A historical analysis carried out superficially or on faulty premises can nonetheless be disseminated and gain credit in many circles if it is spread by attractive, modern means. This is why it is not worth burying good work in some library, waiting for someone to discover it; this is why historians must not ignore 3D. Our capacity, as scholars and students, to perceive important ideas and trends often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit that 3D brings with it, however, they must develop a research agenda aimed at ensuring that 3D supports their objectives as researchers and teachers. A historical reconstruction can be very useful from the educational point of view not only for those who visit it but also for those who build it: the research phase required for the reconstruction can only broaden the developer's cultural background. Conclusions: the most important thing has been the possibility of gaining experience in the use of media of this kind to narrate and make known the past. Reversing the cognitive paradigm I had learned in my humanities studies, I tried to infer what we might call "universal laws" from the objective data that emerged from these experiments. From the epistemological point of view, computing, with its ability to handle impressive masses of data, gives scholars the possibility of formulating hypotheses and then confirming or refuting them through reconstructions and simulations. My work has moved in this direction, seeking to learn and use current tools that will have an ever greater presence in communication (including scientific communication) in the future, and that are the media of choice for certain age groups (adolescents). 
Pushing the terms to the extreme, we can say that the challenge that visual culture today poses to the traditional methods of doing history is the same one that Herodotus and Thucydides posed to the narrators of myths and legends. Before Herodotus there was myth, which was a perfectly adequate means of narrating and giving meaning to the past of a tribe or a city. In a post-literary world, our knowledge of the past is subtly changing the moment we see it represented in pixels, or when information emerges not on its own but through interactivity with the medium. Our capacity as scholars and students to perceive important ideas and trends often depends on the methods we employ to represent data and evidence. For historians to obtain the benefit implicit in 3D, however, they must develop a research agenda aimed at ensuring that 3D supports their objectives as researchers and teachers. The experiences gathered in the preceding pages lead us to think that, in a not too distant future, a tool like the computer will be the principal medium through which knowledge is transmitted, and that from a teaching point of view its interactivity enables a degree of student involvement unmatched by any other modern medium.

Relevância:

30.00%

Publicador:

Resumo:

The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity focuses on the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modeling is a hard challenge from a computational point of view, due to the brittle behaviour combined with three-dimensional effects. The numerical modeling of failure is studied through Sequentially Linear Analysis (SLA), an alternative finite element method with respect to traditional incremental and iterative approaches. The comparison between the two numerical techniques represents one of the first such studies in a three-dimensional setting, and is carried out against one of the experimental tests performed on reinforced concrete slabs. The advantage of SLA is that it avoids the well-known convergence problems of typical non-linear analyses: instead of incrementing load or displacement on the whole structure, a damage increment is specified directly, as a reduction of stiffness and strength in a particular finite element. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution to mesh density. This detailed analysis of the main parameters showed a strong influence of the tensile fracture energy, the mesh density and the chosen model on the solution in terms of the force-displacement diagram, the distribution of crack patterns and the shear failure mode. 
SLA showed great potential, but it requires further development in two aspects of modeling: load conditions (constant and proportional loads) and the softening behaviour of brittle materials (such as concrete) in the three-dimensional field, in order to widen its horizons in these new contexts of study.
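The damage-increment idea behind SLA can be sketched on a toy structure: at each event, run a linear analysis under a unit load, scale the load so the most critical element just reaches its current strength, then reduce that element's stiffness and strength by a saw-tooth law and repeat. This is a minimal illustration on parallel springs sharing one displacement; all numbers and the specific saw-tooth law are assumptions, not the thesis's slab model.

```python
def sla_parallel_springs(k, f_t, n_teeth=3, reduction=0.5):
    """Sequentially linear analysis of parallel springs under a rigid platen.
    k: spring stiffnesses; f_t: current spring strengths.
    Returns a list of (load multiplier, displacement) events."""
    k = list(k); f_t = list(f_t)
    teeth = [0] * len(k)               # softening steps already taken per spring
    curve = []
    while sum(k) > 0:
        K = sum(k)
        u = 1.0 / K                    # displacement under a unit load
        # critical load multiplier: smallest strength/force ratio
        ratios = [ft / (ki * u) if ki * u > 0 else float("inf")
                  for ki, ft in zip(k, f_t)]
        lam = min(ratios)
        if lam == float("inf"):
            break
        crit = ratios.index(lam)
        curve.append((lam, lam * u))   # one point of the force-displacement curve
        # saw-tooth damage increment in the critical spring only
        teeth[crit] += 1
        if teeth[crit] > n_teeth:
            k[crit] = 0.0; f_t[crit] = 0.0   # fully softened
        else:
            k[crit] *= reduction; f_t[crit] *= reduction
    return curve
```

Each iteration is a purely linear solve, which is why the method sidesteps the convergence issues of incremental-iterative non-linear analysis.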

Relevância:

30.00%

Publicador:

Resumo:

This doctoral thesis focused on the investigation of enantiomeric and non-enantiomeric biogenic volatile organic compound (BVOC) emissions at both leaf and canopy scales in different environments. In addition, the anthropogenic compounds benzene, toluene, ethylbenzene and xylenes (BTEX) were studied. BVOCs are emitted into the lower troposphere in large quantities (ca. 1150 Tg C yr-1), approximately an order of magnitude more than anthropogenic VOCs. BVOCs are particularly important in tropospheric chemistry because of their impact on ozone production and on secondary organic aerosol formation and growth. The BVOCs examined in this study were isoprene, (-)/(+)-α-pinene, (-)/(+)-β-pinene, Δ-3-carene, (-)/(+)-limonene, myrcene, eucalyptol and camphor, as these were the most abundant BVOCs observed both in the leaf cuvette study and in the ambient measurements. In the laboratory cuvette studies, the sensitivity of the change in enantiomeric enrichment of leaf emissions was examined as a function of light (0-1600 PAR) and temperature (20-45°C). Three typical Mediterranean plant species (Quercus ilex L., Rosmarinus officinalis L., Pinus halepensis Mill.), with more than three individuals of each, were investigated using a dynamic enclosure cuvette. The terpenoid emission rates were found to be directly linked either to both light and temperature (e.g. Quercus ilex L.) or mainly to temperature (e.g. Rosmarinus officinalis L., Pinus halepensis Mill.). However, the enantiomeric signature showed no clear trend in response to either light or temperature; moreover, a large variation in enantiomeric enrichment was found during the experiments. This enantiomeric signature was also used to distinguish chemotypes beyond the usual achiral chemical-composition method: of nineteen Quercus ilex L. individuals screened under standard conditions (30°C and 1000 PAR), four different chemotypes emerged, whereas the traditional classification showed only two. 
An enclosed branch cuvette set-up was applied in a natural boreal forest environment to four chemotypes of Scots pine (Pinus sylvestris) and one chemotype of Norway spruce (Picea abies), and the direct emissions were compared with ambient air measurements above the canopy during the HUMPPA-COPEC 2010 summer campaign. The chirality of α-pinene was dominated by the (+)-enantiomer for Scots pine, while for Norway spruce the chirality was opposite (i.e. enriched in the (-)-enantiomer), becoming increasingly enriched in the (-)-enantiomer with light. Field measurements over a Spanish stone pine forest were performed to examine the extent of seasonal changes in enantiomeric enrichment (DOMINO 2008). These showed clear differences in the chirality of monoterpene emissions. In wintertime, (-)-α-pinene was found in slight enantiomeric excess over (+)-α-pinene at night, but by day the measured ratio was closer to one, i.e. racemic. Samples taken the following summer in the same location showed much higher monoterpene mixing ratios and revealed a strong enantiomeric excess of (-)-α-pinene. This indicated a strong seasonal variation in the enantiomeric emission ratio that was not manifested in the day/night temperature cycles in wintertime. A clear diurnal cycle of enantiomeric enrichment in α-pinene was also found over a French oak forest and over the boreal forest. However, while in the boreal forest (-)-α-pinene enrichment increased around the time of maximum light and temperature, the French forest showed the opposite tendency, with (+)-α-pinene being favored. For the two field campaigns (DOMINO 2008 and HUMPPA-COPEC 2010), the BTEX compounds were also investigated. For the DOMINO campaign, mixing ratios of the xylene isomers (meta- and para-) and ethylbenzene, which are all well resolved on the β-cyclodextrin column, were exploited to estimate the average OH radical exposure of VOCs from the Huelva industrial area. 
These were compared with empirical estimates of OH based on JNO2 measured at the site, and the deficiencies of each estimation method are discussed. For the HUMPPA-COPEC campaign, benzene and toluene mixing ratios clearly delineate the air masses influenced by the biomass-burning pollution plume from Russia.
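The enantiomeric excess quantities discussed throughout this abstract follow the standard definition ee = ((+) − (−)) / ((+) + (−)); a minimal helper makes the "slight excess", "racemic" and "strong excess" cases concrete. The function name and example numbers are illustrative, not values from the thesis.

```python
def enantiomeric_excess(plus, minus):
    """Enantiomeric excess of the (+)-enantiomer, as a fraction in [-1, 1].
    plus/minus are mixing ratios in any common unit (e.g. pptv).
    ee = 0 is racemic; ee < 0 indicates (-)-enantiomer excess."""
    total = plus + minus
    if total == 0:
        raise ValueError("no signal for either enantiomer")
    return (plus - minus) / total
```

For instance, a nighttime wintertime sample with mixing ratios 40 and 60 for (+)- and (-)-α-pinene would give ee = -0.2, a slight (-)-enantiomer excess.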

Relevância:

30.00%

Publicador:

Resumo:

Purpose: Acupuncture is one of the complementary medicine therapies in greatest demand in Switzerland and many other countries in the West and in Asia. Over the past decades, the pool of scientific literature on acupuncture has markedly increased, but the diagnostic methods upon which acupuncture treatment is based have only been addressed sporadically in scientific journals. The goal of this study is to assess the use of different diagnostic methods in acupuncture practices and to investigate similarities and differences in the use of these methods between physician and non-physician acupuncturists. Methods: 44 physician acupuncturists holding certificates of competence in acupuncture – Traditional Chinese Medicine (TCM) from the ASA (Assoziation Schweizer Ärztegesellschaften für Akupunktur und Chinesische Medizin: the Association of Swiss Medical Societies for Acupuncture and Chinese Medicine) and 33 non-physician acupuncturists listed in the EMR (Erfahrungsmedizinisches Register: a national register that assigns a quality label to therapists in complementary and alternative medicine) in the cantons of Basel-Stadt and Basel-Land were asked to fill out a questionnaire on diagnostic methods. The response rate was 46.8% (69.7% of non-physician acupuncturists and 29.5% of physician acupuncturists). Results: Both physician and non-physician acupuncturists take patients' medical histories (94%), and use pulse diagnosis (89%), tongue diagnosis (83%) and palpation of body and ear acupuncture points (81%) to guide their acupuncture treatments. Between the two groups there were significant differences in the diagnostic tools used: physician acupuncturists examine their patients with Western medical methods significantly more often than non-physician acupuncturists do (p<.05), while non-physician acupuncturists use pulse diagnosis more often than physicians (p<.05). 
A highly significant difference was observed in the time spent collecting patients' medical histories, on which non-physician acupuncturists clearly spent more time (p<.001). Conclusion: Depending on the acupuncturist's educational background, different diagnostic methods are used to reach a diagnosis. In particular, the more time-consuming methods, such as a comprehensive anamnesis and pulse diagnosis, are employed more frequently by non-physician practitioners. Further studies should clarify whether these results hold for Switzerland in general, and to what extent the differing use of diagnostic methods affects the diagnosis itself, the resulting treatment methods, the treatment success and patients' satisfaction.

Relevância:

30.00%

Publicador:

Resumo:

Non-uniform sampling (NUS) has been established as a route to obtaining true sensitivity enhancements when recording indirect dimensions of decaying signals in the same total experimental time as traditional uniform incrementation of the indirect evolution period. Theory and experiments have shown that NUS can yield up to two-fold improvements in the intrinsic signal-to-noise ratio (SNR) of each dimension, while even conservative protocols can yield 20-40% improvements in the intrinsic SNR of NMR data. Applications of biological NMR that can benefit from these improvements are emerging, and in this work we develop some practical aspects of applying NUS nD-NMR to studies that approach the traditional detection limit of nD-NMR spectroscopy. Conditions for obtaining high NUS sensitivity enhancements are considered here in the context of enabling H-1,N-15-HSQC experiments on natural abundance protein samples and H-1,C-13-HMBC experiments on a challenging natural product. Through systematic studies we arrive at more precise guidelines to contrast sensitivity enhancements with reduced line shape constraints, and report an alternative sampling density based on a quarter-wave sinusoidal distribution that returns the highest fidelity we have seen to date in line shapes obtained by maximum entropy processing of non-uniformly sampled data.
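A NUS schedule built from a decaying sampling density can be sketched as a weighted draw of evolution increments. The quarter-wave weighting used below (weight falling from 1 to ~0 over the evolution period, here as a quarter cosine) is one plausible reading of "quarter-wave sinusoidal distribution"; the exact density in the abstract's method may differ, and the function name is an assumption.

```python
import math, random

def quarter_sine_schedule(n_total, n_keep, seed=0):
    """Pick n_keep of n_total indirect-dimension increments for NUS,
    weighting early increments (where the decaying signal is strongest)
    via a quarter-wave sinusoidal density. Returns sorted indices."""
    rng = random.Random(seed)
    # quarter-wave weight: 1 at t=0, falling toward 0 at the last increment
    weights = [math.cos(math.pi * i / (2 * n_total)) for i in range(n_total)]
    chosen = set()
    while len(chosen) < n_keep:
        # weighted draw without replacement
        pick = rng.choices(range(n_total), weights=weights, k=1)[0]
        if pick not in chosen:
            chosen.add(pick)
            weights[pick] = 0.0
    return sorted(chosen)
```

For example, keeping 64 of 256 increments gives the 4x time saving typical of conservative NUS protocols while concentrating samples where the signal has not yet decayed.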


This project considered the second stage of transforming local administration and public service management to reflect democratic forms of government. In Hungary in the second half of the 1990s, more and more of the public functions delegated to local governments were handed over to the private or civil sectors. This led to a relative decrease in municipal functions, but not in local governments' responsibilities, requiring them to change their orientation and approach so as to be effective in their new role of managing these processes rather than carrying out traditional bureaucratic administration. Horvath analysed the Anglo-Saxon, French and German models of self-government, identifying the differing aspects emphasised in increasing the private sector's role in the provision of public services, and the influence that this process has on the system of public administration. He then highlighted linkages between actors and local governments in Hungary, concluding that the next necessary step is to develop institutional mechanisms, financial incentives and managerial practices that utilise the full potential of this process. Equally important are the need to consciously avoid restrictive barriers and unintended consequences, and the need for local governments to confront the social conflicts that have emerged in parallel with privatisation. A further aspect considered was the widening of the role of functional governance at the local level in the field of human services. A number of special purpose bodies have been set up in Hungary, but the results of their work are unclear, and Horvath feels that this institutionalisation of symbiosis is not the right path for Hungary today. He believes that the change from local government to local governance will require the formulation of specific public policy, the relevance of which can be proven by processes supported with actions.


Grigorij Kreidlin (Russia). A Comparative Study of Two Semantic Systems: Body Russian and Russian Phraseology. Mr. Kreidlin teaches in the Department of Theoretical and Applied Linguistics of the State University of Humanities in Moscow and worked on this project from August 1996 to July 1998. The classical approach to non-verbal and verbal oral communication is based on a traditional separation of body and mind. Linguists studied words and phrasemes, the products of mind activities, while gestures, facial expressions, postures and other forms of body language were left to anthropologists, psychologists, physiologists, and indeed to anyone but linguists. Only recently have linguists begun to turn their attention to gestures, and semiotic and cognitive paradigms are now appearing that raise the question of designing an integral model for the unified description of non-verbal and verbal communicative behaviour. This project attempted to elaborate lexical and semantic fragments of such a model, producing a co-ordinated semantic description of the main Russian gestures (including gestures proper, postures and facial expressions) and their natural language analogues. The concept of emblematic gestures and gestural phrasemes, and of their semantic links, permitted an appropriate description of the transformation of the body as a purely physical substance into the body as a carrier of essential attributes of Russian culture - the semiotic process called the culturalisation of the human body. Here the human body embodies a system of cultural values and displays them in a text within the area of phraseology and some other important language domains. The goal of this research was to develop a theory that would account for the fundamental peculiarities of the process. The model proposed is based on the unified lexicographic representation of verbal and non-verbal units in the Dictionary of Russian Gestures, which Mr. Kreidlin had earlier compiled in collaboration with a group of his students. The Dictionary was originally oriented only towards reflecting how the lexical competence of Russian body language is represented in the Russian mind. Now a special type of phraseological zone has been designed to reflect explicitly the semantic relationships between the gestures in the entries and phrasemes, and to provide the necessary information for their detailed description. All the definitions, rules of usage and established correlations are written in a semantic meta-language. Several classes of Russian gestural phrasemes were identified, including phrasemes and idioms with semantic definitions close to those of the corresponding gestures, phraseological units that have lost touch with the related gestures (although etymologically they are derived from gestures that have gone out of use), and phrasemes and idioms which have semantic traces or reflexes inherited from the meaning of the related gestures. The basic assumptions and practical considerations underlying the work were as follows. (1) To compare meanings one has to be able to state them. To state the meaning of a gesture or a phraseological expression, one needs a formal semantic meta-language of propositional character that represents the cognitive and mental aspects of the codes. (2) The semantic contrastive analysis of any semiotic codes used in person-to-person communication also requires a single semantic meta-language, i.e. a formal semantic language of description. This language must be as linguistically and culturally independent as possible, yet open to interpretation through any culture and code. Another possible method of conducting comparative verbal-non-verbal semantic research is to work with different semantic meta-languages and semantic nets and to learn how to combine them, translate from one to another, etc., in order to reach a common basis for the subsequent comparison of units. (3) The practical work of defining phraseological units and organising the phraseological zone in the Dictionary of Russian Gestures unexpectedly showed that semantic links between gestures and gestural phrasemes are reflected not only in common semantic elements and the syntactic structure of semantic propositions, but also in general and partial cognitive operations performed over semantic definitions. (4) In comparative semantic analysis one should take into account the different values and roles of inner form and image components in the semantic representation of non-verbal and verbal units. (5) For the most part, gestural phrasemes are direct semantic derivatives of gestures. The cognitive and formal techniques can be regarded as typological features for a future functional-semantic classification of gestural phrasemes: two phrasemes whose meanings can be obtained by the same cognitive or purely syntactic operations (or types of operations) over the meanings of the corresponding gestures belong by definition to one and the same class. The nature of many cognitive operations has not yet been studied well, but the first steps towards their comprehension and description have been taken. The research identified 25 logically possible classes of relationships between a gesture and a gestural phraseme, calculated from the theoretically possible formal (set-theoretic) correlations between the signifiers and signifieds of the non-verbal and verbal units. However, in order to examine which of them are realised in practice, a complete semantic and lexicographic description of all (not only central) everyday emblems and gestural phrasemes is required, and this unfortunately does not yet exist. Mr. Kreidlin suggests that the results of the comparative analysis of verbal and non-verbal units could also be used in other research areas, such as the lexicography of emotions.


An accurate assessment of the computer skills of students is a prerequisite for the success of any e-learning intervention. The aim of the present study was to objectively assess computer literacy and attitudes in a group of Greek post-graduate students, using a task-oriented questionnaire developed and validated at the University of Malmö, Sweden. Fifty post-graduate students at the Athens University School of Dentistry took part in the study in April 2005. A total competence score of 0-49 was calculated, socio-demographic characteristics were recorded, and attitudes towards computer use were assessed. Descriptive statistics and linear regression modeling were employed for data analysis. The total competence score was normally distributed (Shapiro-Wilk test: W = 0.99, V = 0.40, P = 0.97) and ranged from 5 to 42.5, with a mean of 22.6 (+/-8.4). Multivariate analysis revealed 'gender', 'e-mail ownership' and 'enrollment in non-clinical programs' as significant predictors of computer literacy. In conclusion, computer literacy among Greek post-graduate dental students was higher amongst males, students in non-clinical programs and those with more positive attitudes towards the implementation of computer-assisted learning.


Some schools do not have ideal access to laboratory space and supplies. Computer simulations of laboratory activities can be a cost-effective way of presenting experiences to students, but are those simulations as effective at teaching content concepts? This study compared traditional lab activities illustrating the principles of cell respiration and photosynthesis in an introductory high school biology class with virtual simulations of the same activities. Additionally, student results were analyzed to assess whether conceptual understanding was affected by the complexity of the simulation. Although all student groups posted average gains between the pre- and post-tests, coupled with positive effect sizes, students who completed the wet-lab version of an activity consistently outperformed students who completed the virtual simulation of the same activity. There was no significant difference between the use of more and less complex simulations. Students also tended to rate the wet-lab experience higher on a motivation and interest inventory.


This study develops an automated analysis tool that combines total internal reflection fluorescence microscopy (TIRFM), an evanescent-wave microscopic imaging technique used to capture time-sequential images, with corresponding Matlab image-processing code that identifies the movements of single individual particles. The developed code enables examination of the two-dimensional hindered tangential Brownian motion of nanoparticles with sub-pixel (nanoscale) resolution. The measured mean square displacements of nanoparticles are compared with theoretical predictions to estimate particle diameters and fluid viscosity using a nonlinear regression technique, and these estimates are validated against the diameters and viscosities given by the manufacturers. Nanoparticles used in these experiments are yellow-green polystyrene fluorescent nanospheres (200 nm, 500 nm and 1000 nm in nominal diameter; 505 nm excitation and 515 nm emission wavelengths). Solutions used are de-ionized (DI) water, 10% d-glucose and 10% glycerol. Mean square displacements obtained near the surface show significant deviation from the theoretical predictions, attributed to DLVO forces in that region, but conform to the predictions from ~125 nm onwards. The proposed automated analysis tool can be employed effectively in bio-application fields requiring single-particle tracking of proteins, DNA or vesicles, drug delivery, and cytotoxicity studies, unlike traditional measurement techniques that require fixing the cells. Furthermore, this tool can also be applied in the microfluidic areas of non-invasive thermometry, particle tracking velocimetry (PTV) and non-invasive viscometry.
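The core of the analysis described above, relating measured mean square displacements to particle size, can be sketched in a few lines. The study's tool is Matlab-based and fits hindered (near-wall) diffusion with nonlinear regression; this Python sketch assumes instead the simple free-diffusion Stokes-Einstein relations (MSD = 4Dt in two dimensions, D = kT/3πηd), with hypothetical helper names.

```python
import math

def mean_square_displacement(xs, ys, lag):
    """MSD of a 2-D trajectory (position lists in metres) at a given
    lag, in frames: average squared step over all pairs `lag` apart."""
    n = len(xs) - lag
    return sum((xs[i + lag] - xs[i]) ** 2 + (ys[i + lag] - ys[i]) ** 2
               for i in range(n)) / n

def estimate_diameter(msd_slope, viscosity, temp=298.15):
    """Invert Stokes-Einstein for the diameter: in 2-D, MSD = 4*D*t,
    so the MSD-vs-time slope (m^2/s) gives D, and d = kT/(3*pi*eta*D).
    Assumes free diffusion far from the wall (no DLVO hindrance)."""
    k_b = 1.380649e-23          # Boltzmann constant, J/K
    D = msd_slope / 4.0         # diffusion coefficient from the slope
    return k_b * temp / (3 * math.pi * viscosity * D)
```

Given a fitted MSD slope for a 200 nm sphere in water (η ≈ 1 mPa·s), the inversion recovers the nominal diameter.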


Mower is a micro-architecture technique that targets branch misprediction penalties in superscalar processors. It speeds up the misprediction recovery process by dynamically evicting stale instructions and repairing the RAT (Register Alias Table) using explicit branch dependency tracking, accomplished with simple bit matrices. This low-overhead technique allows the recovery process to overlap with instruction fetching, renaming and scheduling from the correct path. Our evaluation indicates that the mechanism yields performance very close to ideal recovery, providing up to 5% speed-up and a 2% reduction in power consumption compared to a traditional recovery mechanism using a reorder buffer and a walker. Its simplicity should permit easy implementation of Mower in an actual processor.
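The explicit branch-dependency idea above can be illustrated with a toy sketch; the class, method names and structure are hypothetical, not the paper's hardware design. Each in-flight instruction carries a bitmask (one row of the bit matrix) of the unresolved branches it depends on, so a misprediction evicts exactly the dependent instructions rather than walking and squashing everything younger.

```python
class BranchDependencyTracker:
    """Toy model of bit-matrix branch dependency tracking
    (illustrative sketch only, names hypothetical)."""

    def __init__(self):
        self.active_branches = 0   # bitmask of unresolved branch tags
        self.instructions = []     # list of (instr_id, dependency_mask)

    def allocate_branch(self, tag):
        # a new conditional branch enters the pipeline
        self.active_branches |= (1 << tag)

    def dispatch(self, instr_id):
        # instruction depends on every currently unresolved branch
        self.instructions.append((instr_id, self.active_branches))

    def resolve_correct(self, tag):
        # branch resolved as predicted: clear its bit everywhere
        self.active_branches &= ~(1 << tag)
        self.instructions = [(i, m & ~(1 << tag))
                             for i, m in self.instructions]

    def mispredict(self, tag):
        # selectively evict only instructions dependent on this branch
        self.instructions = [(i, m) for i, m in self.instructions
                             if not (m >> tag) & 1]
        self.active_branches &= ~(1 << tag)
```

Instructions dispatched before a branch was allocated carry a clear bit for it and survive its misprediction, which is what lets recovery overlap with fetch from the correct path.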


An electrospray source has been developed using a novel fluid that is both magnetic and conductive. Unlike conventional electrospray sources, which require microfabricated structures to support the fluid to be electrosprayed, this fluid exploits the Rosensweig instability: applying an external magnetic field creates the supporting structures in the magnetic fluid itself, and applying an external electric field causes these structures to spray. The fluid-based structures were found to spray at a lower onset voltage than predicted for electrospray sources with solid structures of similar geometry, and to be resilient to damage, unlike the solid structures of traditional electrospray sources. Further, experimental studies of magnetic fluids in non-uniform magnetic fields were conducted. The modes of Rosensweig instabilities created by uniform magnetic fields have been studied in depth, but little to no work has examined Rosensweig instabilities formed by non-uniform magnetic fields. The measured spacing of the cone-like ferrofluid structures in a non-uniform magnetic field was found to agree with a proposed theoretical model.
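For the uniform-field case mentioned above, the classical Rosensweig analysis gives a critical wavelength λ_c = 2π√(σ/Δρg), where σ is the surface tension and Δρ the density difference across the interface (magnetic corrections neglected). The abstract's non-uniform-field model is not given, so this sketch only illustrates the uniform-field spacing; the ferrofluid property values are assumed, typical order-of-magnitude numbers.

```python
import math

def rosensweig_spacing(surface_tension, density_diff, g=9.81):
    """Critical wavelength of the normal-field (Rosensweig) instability
    in a uniform field: lambda_c = 2*pi*sqrt(sigma / (delta_rho * g)).
    Units: N/m, kg/m^3, m/s^2 -> metres."""
    return 2 * math.pi * math.sqrt(surface_tension / (density_diff * g))
```

With assumed values σ ≈ 0.025 N/m and Δρ ≈ 1200 kg/m³ (ferrofluid under air), the predicted peak spacing comes out on the order of 9 mm, consistent with the millimetre-scale cone arrays typically reported.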


Traditional courses and textbooks in occupational safety emphasize rules, standards, and guidelines. This paper describes the early stage of a project to upgrade a traditional college course on fire protection by incorporating learning materials that develop the higher-level cognitive ability known as synthesis. Students will be challenged to synthesize textbook information into fault tree diagrams. The paper explains the place of synthesis in Bloom's taxonomy of cognitive abilities and the utility of fault tree diagrams as a tool for synthesis. The intended benefits for students are: improved ability to synthesize, a deeper understanding of fire protection practices, the ability to construct fault trees for a wide range of undesired occurrences, and perhaps the recognition that heavy reliance on memorization is the hard way to learn occupational safety and health.
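A fault tree of the kind students would construct combines basic events through AND/OR gates up to a top undesired event. A minimal sketch, with event names and tree structure invented for illustration rather than taken from the course:

```python
class Gate:
    """Minimal fault-tree node: the gate's event occurs when its
    AND/OR condition over the child events is met. Children are
    either nested Gate objects or basic-event name strings."""

    def __init__(self, kind, children):
        self.kind = kind          # "AND" or "OR"
        self.children = children

    def occurs(self, basic_events):
        """Evaluate the tree for a set of basic events that occurred."""
        results = [c.occurs(basic_events) if isinstance(c, Gate)
                   else c in basic_events
                   for c in self.children]
        return all(results) if self.kind == "AND" else any(results)

# Hypothetical top event: uncontrolled fire requires an ignition
# source AND fuel AND failure of at least one protective system.
fire = Gate("AND", ["ignition", "fuel",
                    Gate("OR", ["sprinkler_failed", "alarm_failed"])])
```

Evaluating the tree against different combinations of basic events shows which minimal sets of failures lead to the top event, which is the analytical payoff of synthesizing textbook material into a fault tree.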