888 results for feeling


Relevance:

10.00%

Publisher:

Abstract:

In keeping with the doubt expressed in its title, Reyner Banham's widely read book The New Brutalism: Ethic or Aesthetic? cannot explain Brutalism as a cohesive artistic manifestation endowed with formal consistency and reproducibility. The citation of famous yet divergent architects seems to associate and compare rough works with the pretension of reviving a modern architecture considered ascetic, monotonous and insufficient, and generates a theoreticism of raw appearance and of the moralism of objects. This discourse conceals, or diverts attention from, the artistic return to sublimity and artisanal construction. Brutalism is accepted as the natural evolution of earlier modern stages and sanctions crude, heavy and unfinished artefacts as if they were affiliated with an uncontaminated modern process. It hides contradictions and disguises its break with the modern in order to prolong the expression Modern Movement. But the clear, economical and precise object is repudiated by the consumer and, since it is scarcely representative, the artist dresses it up with contrasting and monumental episodes amid the informality of spontaneous cities. Nevertheless, it seems possible to suspend the positive and corrective notion of Brutalism and to understand it as a vulgarizing artistic retreat that disdains refinement and affronts the modern attitude with conceptual banalization, exaggeration, figurality, structural muscularity, tectonic grandeur, rudiment and rudeness. Thus, moralism, rustic return and originality disqualify the expression International Style, understood as the culmination of post-war modern architecture, by deprecating it as decadent, as a real-estate, commercial and corporate product at the service of capital. This interpretation reveals an anti-industrial critique, therefore antimodernist and distinct from postmodernity, yet contestatory and realist in supplying images to culture and to those insensitive to the structure of modern form. It differs from postmodernity in its dependence on the modern and its lack of popular appeal. With the timely configuration of the artefact rendered insignificant, the architect tries to retain his artistic notability, or the prestige that seems to weaken in the look-alike appearance of catalogue specification and modular rigour. He resents and repudiates the components, standards and impersonal finishes of the construction industry in order to insist on authorship and inspiration, yet repeats period stylistic tics and the inexplicable intensive use of raw, exposed concrete in order to feel engaged and up to date. However, it is necessary to distinguish works of severe appearance conceived by the most authentic modern attitude from those in exposed concrete of aberrant types or configurations. To advance the discussion of Brutalism, it is proposed to understand this phenomenon through the replacement of the modern aesthetic judgement of visual sense postulated by Immanuel Kant (1724-1804) with an easy aesthetic feeling related to the sensation of empathy, the Einfühlung of Robert Vischer (1847-1933). In the age of mass culture, a lowering of the demands placed on the artefact is admitted, together with the Brutalist adaptation through the transfiguration of the processes of modern architecture.
Thus, form is replaced by figure or by material summary; the underlying formal structure by the rhythm and exposure of the physical structure; visual recognition by psychological enthusiasm or by the Dionysian impulse; conception by the parti or, further, by the concept; systematization and order by moulding and organization; abstraction and synthesis by originality and essentiality; the constructive sense by material honesty; the identity of the parts by fusion or by objectual oneness; and the residence by the primitive hut.

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of four self-contained essays in economics. Tournaments and unfair treatment. This paper introduces the negative feelings associated with the perception of being unfairly treated into a tournament model and examines the impact of these perceptions on workers’ efforts and their willingness to work overtime. The effect of unfair treatment on workers’ behavior is ambiguous in the model in that two countervailing effects arise: a negative impulsive effect and a positive strategic effect. The impulsive effect implies that workers react to the perception of being unfairly treated by reducing their level of effort. The strategic effect implies that workers raise this level in order to improve their career opportunities and thereby avoid feeling even more unfairly treated in the future. An empirical test of the model using survey data from a Swedish municipal utility shows that the overall effect is negative. This suggests that employers should consider the negative impulsive effect of unfair treatment on effort and overtime when designing contracts and deciding on promotions. Late careers in Sweden between 1970 and 2000. In this essay Swedish workers’ late careers between 1970 and 2000 are studied. The aim is to examine older workers’ career patterns and whether they have changed during this period. For example, is there a difference in career mobility or labor market exit between cohorts? What affects the late career, and does this differ between cohorts? The analysis shows that between 1970 and 2000 the late careers of Swedish workers comprised few job changes and consisted more of “trying to keep the job you had in your mid-fifties” than of climbing up the promotion ladder. There are no cohort differences in this pattern. Also, a large fraction of the older workers exited the labor market before the normal retirement age of 65. During the 1970s and the first part of the 1980s, 56 percent of the older workers made an early exit and the average drop-out age was 63. During the late 1980s and the 1990s the share of older workers who made an early exit had risen to 76 percent and the average drop-out age had dropped to 61.5. Different factors affected the probability of an early exit between 1970 and 2000. For example, skills did affect the risk of exiting the labor market during the 1970s and up to the mid-1980s, but not in the late 1980s or the 1990s. During the first period, older workers in the lowest occupations or with the lowest level of education were more likely to exit the labor market than more highly skilled workers. In the second period, older workers at all levels of skill had the same probability of leaving the labor market. The growth and survival of establishments: does gender segregation matter? We empirically examine the employment dynamics that arise in Becker’s (1957) model of labor market discrimination. According to the model, firms that employ a large fraction of women will be relatively more profitable due to lower wage costs, and thus enjoy a greater probability of surviving and growing by underselling other firms in the competitive product market. In order to test these implications, we use a unique Swedish matched employer-employee data set. We find that female-dominated establishments do not enjoy any greater probability of surviving and do not grow faster than other establishments. Additionally, we find that integrated establishments, in terms of gender, age and education levels, are more successful than other establishments.
Thus, attempts by legislators to integrate firms along all dimensions of diversity may have positive effects on the growth and survival of firms. Risk and overconfidence – Gender differences in financial decision-making as revealed in the TV game-show Jeopardy. We have used unique data from the Swedish version of the TV show Jeopardy to uncover gender differences in financial decision-making by looking at the contestants’ final wagering strategies. After ruling out empirical best responses, which do appear in Jeopardy in the US, a simple model is derived to show that risk preferences and the subjective and objective probabilities of answering correctly (individual and group competence) determine wagering strategies. The empirical model shows that, on average, women adopt more conservative and diversified strategies, while men’s strategies aim for the greatest gains. Further, women’s strategies are more responsive to the competence measures, which suggests that they are less overconfident. Together these traits make women more successful players. These results are in line with earlier findings on gender and financial trading.
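
The wagering argument above lends itself to a small illustration. The sketch below is a hypothetical expected-utility calculation, not the model derived in the thesis: a contestant with a given score and a subjective probability p of answering correctly picks the wager that maximizes expected CRRA utility, and a more risk-averse player (larger gamma) wagers less at the same level of competence. All names and numbers are invented for illustration.

```python
import math

# Illustrative sketch only (hypothetical names and numbers, not the thesis' model):
# a contestant with current score `score`, subjective probability `p` of answering
# the final question correctly and CRRA risk aversion `gamma` chooses the wager
# that maximizes expected utility.

def crra_utility(x, gamma):
    """Constant relative risk aversion utility; gamma = 0 is risk neutrality."""
    if gamma == 1.0:
        return math.log(x)
    return (x ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def expected_utility(score, wager, p, gamma):
    """Expected utility of a wager that is won with probability p and lost otherwise."""
    win = crra_utility(score + wager, gamma)
    lose = crra_utility(max(score - wager, 1e-9), gamma)  # guard against log(0) / division by zero
    return p * win + (1.0 - p) * lose

def best_wager(score, p, gamma, step=100):
    """Grid search over admissible wagers (0..score) for the utility-maximizing one."""
    return max(range(0, score + 1, step),
               key=lambda w: expected_utility(score, w, p, gamma))

# A more risk-averse player (larger gamma) wagers less at the same subjective competence.
print(best_wager(score=10000, p=0.7, gamma=0.5))  # relatively aggressive wager
print(best_wager(score=10000, p=0.7, gamma=2.0))  # more conservative wager
```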

Relevance:

10.00%

Publisher:

Abstract:

Hans-Georg Gadamer’s philosophical hermeneutics – undoubtedly one of the cornerstones of twentieth-century thought – is a highly composite, multifaceted and articulated philosophy, formed, so to speak, by a multiplicity of different dimensions interwoven with one another. This is already evident from a simple glance at the internal composition of his main work, Wahrheit und Methode (1960), which presents a theory of understanding that examines three different dimensions of human experience – art, history and language – obviously conceived as fundamentally interrelated. But this overall picture becomes considerably more complicated as soon as one considers at least some of the numerous contributions Gadamer wrote and published before and after his opus magnum: contributions that attest to the important presence of other themes in his thought. Gadamer’s interpreters, however, have not always fully taken this complexity into account, since a large part of the exegetical work on his thought is essentially centred on the 1960 masterpiece (and in particular on the problems of the legitimization of the Geisteswissenschaften), devoting less attention to the other paths he followed and, in particular, to the properly ethical and political dimension of his hermeneutical philosophy. Moreover, it seems to me that due attention has not always been paid to the fundamental unity – not to be confused with an alleged “systematicity”, which Gadamer explicitly rejected – that nevertheless prevails within Gadamerian thought despite its undoubted multiplicity and heterogeneity. My thesis, then, is that aesthetics and the human sciences, philosophy of language and moral philosophy, dialogue with the Greeks and critical confrontation with modern thought, considerations on anthropological problems and reflections on our sociopolitical and technoscientific present all represent the different dimensions of a single body of thought, which in some way converge towards a single centre. A “unifying” centre that, in my view, is to be located in what we might call the discontent of modernity. In other words, it seems to me that all of Gadamer’s philosophical reflection ultimately springs from the acknowledgement of a situation of crisis or discontent in which our world and our civilization find themselves today. A crisis that, given its depth and complexity, has, so to speak, “branched out” in multiple directions, coming to affect various spheres of human existence. These spheres are therefore analysed and investigated by Gadamer with a critical eye, in an effort to bring out the main problematic knots and, in their light, to put forward alternative proposals, remedies, “correctives” and possible solutions. Starting from this basic understanding, my research is organized into three large sections devoted respectively to the pars destruens of Gadamerian hermeneutics (first and second sections) and to its pars construens (third section).
In the first section – entitled A Phenomenology of Modernity: The Multiple Symptoms of the Crisis – after showing how much of twentieth-century philosophy has been dominated by the idea of a crisis currently afflicting Western civilization, and how Gadamer’s hermeneutics too can be placed within this underlying philosophical discourse, I try to illustrate one by one what, in the eyes of the philosopher of Truth and Method, are the main symptoms of the present crisis. These symptoms include: the socioeconomic pathologies of our “administered” and bureaucratized world; the indiscriminate planetary expansion of the Western lifestyle to the detriment of other cultures; the crisis of values and certainties, with the concomitant spread of relativism, scepticism and nihilism; the growing inability to relate in an adequate and meaningful way to art, poetry and culture, increasingly degraded to mere entertainment; and, finally, the problems linked to the proliferation of weapons of mass destruction, to the concrete possibility of an ecological catastrophe and to the disquieting prospects opened up by some recent scientific discoveries (above all in the field of genetics). Once the general profile Gadamer gives of our age has been outlined, in the second section – entitled A Diagnosis of the Discontent of Modernity: The Spread of Technical-Scientific Instrumental Rationality – I try to show how, at the root of all these phenomena, he essentially sees a single cause, one that in his judgement moreover coincides with the very origin of modernity: namely, the birth of modern science and its intrinsic link with technology and with a specific form of rationality that Gadamer – evidently drawing on interpretive categories elaborated by Max Weber, Martin Heidegger and the Frankfurt School – also calls «instrumental rationality» or «calculating thought». Starting from this basic vision, I then try to provide an analysis of Gadamer’s conception of technoscience, highlighting at the same time several points: first, that Gadamer’s philosophical hermeneutics should not be interpreted as a one-sidedly anti-scientific philosophy, but rather as an anti-scientistic one (which is of course something quite different); second, that his reconstruction of the crisis of modernity never results in a “totalizing” critique of reason, nor in a pessimistic-negative philosophy of history centred on the idea of an ineluctable course of events driven by an “irrational” rationality contaminated by the lust for power and domination; third, and finally, that Gadamer’s philosophy – despite the inveterate interpretations that tend to see in it a traditionalist, authoritarian and radically anti-Enlightenment way of thinking – does not at all intend to reject modern scientific Enlightenment tout court, nor to disown its most important achievements, but simply to “correct” some of its tendencies and to recover a broader and more comprehensive notion of reason, capable of accounting also for those aspects of human experience which, in the eyes of a “limited” rationality such as the scientistic one, can only appear as mere residues of irrationality.
Having thus examined in the first two sections what we may call the pars destruens of Gadamer’s philosophy, in the third and final section – entitled A Therapy for the Crisis of Modernity: The Rediscovery of Experience and Practical Knowledge – I move on to examine its pars construens, which in my judgement consists in a critical recovery of what he calls «another kind of knowledge»; that is, in an attempt to rehabilitate all those pre- and extra-scientific forms of knowledge and experience that Gadamer considers constitutive of the «hermeneutical dimension» of human existence. My analysis of Gadamer’s conception of Verstehen and Erfahrung – as forms of a «practical knowledge (praktisches Wissen)» different in principle from theoretical and technical knowledge – thus leads to an overall interpretation of philosophical hermeneutics as a genuine practical philosophy, that is, as an effort of philosophical clarification of that pre-scientific, intersubjective, “common sense” knowledge actually operative in the sphere of our Lebenswelt and of our practical existence. This, finally, also inevitably leads to an emphasis on the ethical-political implications of Gadamer’s hermeneutics. In particular, I try to examine Gadamer’s conception of ethics – taking into account its relations with the moral doctrines of Plato, Aristotle, Kant and Hegel – and, in the end, to sketch a profile of his philosophical hermeneutics as a philosophy of dialogue, solidarity and freedom.

Relevance:

10.00%

Publisher:

Abstract:

The survey of the works composed by Filippo Tommaso Marinetti between 1909 and 1912 is supported by a paradoxical thesis: Marinetti’s Futurism would not be an expression of modernity but rather an anti-modern reaction which, behind a superficial and enthusiastic adherence to some of the slogans of the second industrial revolution, conceals an underlying pessimism towards man and history. In this sense Futurism becomes an emblem of Italian cultural backwardness and gattopardismo, anticipating the analogous operation carried out in politics by Mussolini: behind a formal adherence to certain demands of modernity, the preservation of the status quo. Marinetti is described as a foreign body with respect to the scientific culture of the twentieth century: a Futurist without a future (science-fiction projections are extremely rare in Marinetti). This aspect is particularly evident in the works produced in the three-year period 1908-1911, which are not only very different from the later Futurist works but in some respects represent a true antithesis of what literary Futurism would become from 1912 onwards, with the publication of the Manifesto tecnico della letteratura futurista and the invention of the parole in libertà. In the earlier works, a substantial lack of interest in technological progressivism was matched by an obsessive attention to corporeality and a continual recourse to allegory, with particularly grotesque effects (above all in the novel Mafarka le futuriste) in which traces of a worldview with a still medieval-Renaissance flavour can be detected. This regressive component of Marinettian Futurism was flagrantly abandoned from 1912 onwards, with Zang Tumb Tumb, only to resurface cyclically, like an underground current, in other phases of his career: in 1922, for example, with the publication of Gli indomabili (another allegorical work, rich in literary reminiscences). That of 1912 is a genuine fracture, which in the first chapter is investigated both from a historical point of view (through epistolary and journalistic documentation, the tensions that led most of the Futurist poets to abandon the movement in that very year are brought to light) and from a linguistic point of view: the substantial differences between the words-in-freedom production and the earlier one are underlined, and a psychological explanation of the abrupt turn Marinetti gave his movement is also ventured. The second chapter proposes a formal and thematic analysis of the ‘grotesque function’ in Marinetti’s works. In the third chapter, a comparative analysis of the incarnations of the machines portrayed in Marinetti’s works reveals that in this author the machine is almost always associated with the thought of death and with a masochistic drive (the latter dominant in Gli indomabili); this leads to the hypothesis that the Futurist experience, and in particular the words-in-freedom Futurism after 1912, is the reworking of a trauma. This trauma can be interpreted metaphorically as the shock of the young Marinetti, who leapt within a few years from the sands of Alexandria in Egypt to the industrial mists of Milan, but also as a real traumatic experience (the car accident of 1908, “mythologized” in the first manifesto, but in fact lived by the author as a genuinely disturbing experience).

Relevance:

10.00%

Publisher:

Abstract:

This research sets out to define guidelines for drawing up a Plan concerned with quality of life and well-being. The appeal to quality and well-being is positively innovative, in that it requires decision-making bodies to attune themselves to the active subjectivity of citizens and, at the same time, makes evident the need for a broader and more transversal approach to the theme of the city and for a closer relationship between technicians/experts and those in charge of political-administrative bodies. The research aims to investigate the limits of modern urban planning when faced with the complexity of needs and new necessities expressed by contemporary urban populations. The demand for services has changed considerably since the 1960s, not only quantitatively but also and above all qualitatively, owing to the social changes that have transformed the modern city both structurally and culturally: the intermittence of citizenship, whereby cities are increasingly lived in and enjoyed by citizens of the world (tourists and/or visitors, temporarily present) and by diffuse citizens (suburban, provincial, metropolitan); the radical transformation of the family structure, whereby the typical family made up of a couple with children, a solid reference point for economics and politics, is today a minority; the irregularity and flexibility of the calendars, agendas and life rhythms of the active population; social mobility, whereby individuals have life trajectories and daily practices less determined by their social origins than in the past; the rise in the level of education and hence the increase in the demand for culture; the growth of the elderly population and strong social individualization have generated a demand for the city on the part of people that is extremely varied and heterogeneous, fragmented and volatile, and in some respects entirely new. Alongside old and consolidated demands – the efficient, functional, productive city accessible to all – new demands, ideals and needs arise whose object is beauty, variety, usability, safety, the capacity to amaze and entertain, sustainability, the search for new identities: demands that express the desire to live in and enjoy the city, to feel well in the city, demands that can no longer be satisfied by an idea of welfare based simply on education, health care, the pension system and social assistance. The modern city, or rather the modern idea of the city, organized solely around the concepts of order, regularity, cleanliness, equality and good government, has been consigned to past history, now turning into something quite different that we struggle to represent, describe and narrate. The contemporary city can be represented in multiple ways, both from an urban-planning and from a social point of view: in the recent literature the difficulty of defining and enclosing the object “city” within certain limits is evident, as is the lack of strong conviction in interpreting the political, economic and social transformations that swept over society and the world in the last century.
Beyond administrative boundaries, territorial expansion and urban layouts, infrastructure, technology, functionalism and global markets, the contemporary city is also the place of human relations, a representation of the relationships between individuals and of the urban space in which these relations move. The city is both a physical concentration of people and buildings and a variety of uses and groups, a density of social relations; it is the place where processes of social cohesion or exclusion take place, the place of the cultural norms that regulate behaviour, of the identity that is expressed materially and symbolically in the public space of city life. To study the contemporary city it is necessary to use a new approach, made up of cross-fertilizations and transversal knowledge supplied by other disciplines, such as sociology and the human sciences, which also contribute to constructing the commonly perceived image of the city and the territory, the landscape and the environment. The representation of the urban social sphere varies according to the idea of what, at a given historical moment and in a given context, constitutes a situation of well-being for people. Modern urban planning aimed at the maximum well-being of the individual and the community and at modelling itself on the “actual needs of people”: in the old planning manuals the “Services Plan” appears as an appendix to the master plan, covering the services distributed over the surrounding territory, a sort of “social master plan” intended to avoid neighbourhoods separated by population group or class. In the contemporary city, globalization, new forms of marginalization and exclusion, the advent of the so-called “new economy”, and the redefinition of the urban productive base and labour market are expressions of a social complexity that can be defined on the basis of transactions and symbolic exchanges rather than of the processes of industrialization and modernization towards which the historical city, defined as modern, was oriented. All this constitutes the complex of issues currently referred to as the “new welfare”, as opposed to the welfare essentially based on education, health care, the pension system and social assistance. The research therefore analysed the traditional instruments of territorial planning and programming, in their operational and institutional dimensions: the main purpose of these instruments lies in the classification and arrangement of services and urban containers. It is clear, however, that in order to respond to the manifold complexity of demands, needs and desires expressed by contemporary society, the actual endowments for “making the city” must necessarily go beyond the concepts of “standard” and “zoning”, which prove too rigid and thus incapable of adapting to the evolution of a growing demand for quality and services, and at the same time inadequate for managing the relationship between domestic space and collective space. In this sense the relationship between housing typologies and urban morphology is relevant, and hence also the environment around the home, which establishes the relationship “from the house to the city”, because it is in this duality that the relationship between private and public spaces is defined and the themes of the street, the shops, the meeting places and the accesses are contextualized.
After the convergence from the urban scale to the building scale, one then moves from the building scale back to the urban scale, since the criterion of well-being runs across the different scales of inhabitable space. Moreover, in territorial systems that have achieved widespread well-being and a high level of economic development, the awareness has emerged that the very concept of well-being is no longer linked exclusively to collective and/or individual income capacity: today the quality of life is measured in terms of environmental and social quality. Hence the need for an instrument of knowledge of the contemporary city, to be attached to the Plan, in which the criteria to be observed in designing urban space are defined in order to determine the quality and well-being of the built environment, understood as generalized well-being, in its meaning of “the quality of feeling well”. It is evident that in order to reach such a level of quality and well-being it is necessary to satisfy, on the one hand, the macroscopic aspects of social functioning and standard of living through indicators of income, employment, poverty, crime, housing, education, etc.; and, on the other, primary, elementary and basic needs as well as secondary, cultural and therefore changing ones, passing from the welfare state to personal well-being, to wellness in a holistic sense, all expressions of a desire for mental and physical beauty and for a new relationship between the body and the environment, and thus the concrete manifestation of a need for individual and collective well-being. And it is this new and difficult need that creates the widespread feeling of the beginning of a new urban season, much more than the physical changes to the city themselves would suggest.

Relevance:

10.00%

Publisher:

Abstract:

This study aims at analysing Brian O’Nolan’s literary production in the light of a reconsideration of the role played by his two most famous pseudonyms, Flann O’Brien and Myles na Gopaleen, behind which he was active both as a novelist and as a journalist. We tried to establish a new kind of relationship between them and their empirical author, following recent cultural and scientific surveys in the fields of Humour Studies, Psychology, and Sociology: taking as a starting point the appreciation of the comic attitude in nature and in cultural history, we moved through a short history of laughter and derision, followed by an overview of humour theories. Having established this frame, we took an integration of scientific studies in the field of laughter and humour as the basis for our study scheme, in order to arrive at a definition of the comic author as a recognised, powerful and authoritative social figure who acts as a critic of conventions. The history of laughter and the comic that we briefly summarized, based on the account given by the French scholar Georges Minois in his work (Minois 2004), has been considered in the view that the humorous attitude is one of man’s characteristic traits, always present and witnessed throughout the ages, though in most cases subject to repression by conservative cultural and political power. This sort of Super-Ego notwithstanding, or perhaps because of it, the comic impulse proved irreducible, precisely in its influence on current cultural debates. Drawing mainly on Robert R. Provine’s (Provine 2001), Fabio Ceccarelli’s (Ceccarelli 1988), Arthur Koestler’s (Koestler 1975) and Peter L. Berger’s (Berger 1995) scientific essays on the actual occurrence of laughter and smiling in complex social situations, we underlined the abundant evidence that the use of the comic, humour and wit (in a Freudian sense) is best understood as a common mental process designed for the improvement of knowledge, in which we traced a close relation to the play-element the Dutch historian Huizinga highlighted in his famous essay Homo Ludens (Huizinga 1955). We considered the comic and humour/wit as different sides of the same coin, and showed that the demonstrations scientists have provided on this particular subject are not conclusive, given that the mental processes cannot yet be irrefutably shown to be separate as regards gradations in comic expression and reception: in fact, different expressive outputs may lead back to one and the same production process, following the general ‘Economy Rule’ of evolution; man is the only animal who lies, meaning by this that a feeling is not necessarily associated one-to-one with a single outward display, so human expressions are not proof of feelings. Considering societies, we found that in nature they are all organized in more or less the same way, that is, into élites that govern a community which, in turn, recognizes them as legitimate delegates for that task; from this we inferred the epistemological possibility of an additional ruling figure alongside the political and religious ones: the comic, the person in charge of expressing true feelings towards given subjects of contention.
Every community has one, and his very peculiar status is validated by the fact that his place is within the community, living in it and speaking to it, while at the same time he stands outside it, in the sense that his action focuses mainly on shedding light on ideas and objects placed outside the boundaries of social convention: taboos, fears, sacred objects and, finally, culture are the favourite targets of the comic person’s arrows. This is the reason for the word a(rche)typical as applied to the comic figure in society: atypical in one sense, because unconventional and disrespectful of traditions, critical and never at ease with unquestioning respect for canons; archetypical, because the “village fool”, buffoon, jester or anyone who plays such roles in any kind of society is an archetype in the Jungian sense, i.e. a personification of an irreducible side of human nature that everybody instinctively knows: the founder of a tradition, the perfect type, the most conventional figure of all and therefore the exact opposite of the atypical. There is, we think, an intrinsic need for such figures in societies, just as for politicians and priests, who should play an elite role in order to guide and rule not for their own benefit but for the good of the community. We are not naïve and know very well that actual holders of power always tend to keep it indefinitely: the ‘social comic’ as a role of power nonetheless has the distinctive feature of being the only office whose tension is not towards stability. It carries within itself the rewarding permission to contradict, for the very reason set out above: the comic must cast an eye both inside and outside society, and his vision may perforce be inconsistent; it is then rewarded by the popularity this brings among readers and audiences. Finally, the difference between governors, priests and comic figures lies in the seriousness of the first two (fundamentally monologic) and the merry contradiction of the third (essentially dialogic). MPs, mayors, bishops and pastors should always console, comfort and soothe the popular mood with regard to public convention; the comic has the opposite task of provoking, urging and irritating, at the same time exercising a sort of check on the soothing powers of society, the keepers of righteousness. In this view, the comic person assumes paramount importance in counterbalancing the administration of power, whether in the form of performance in public places or in written pieces that circulate for private reading. At this point our Irish writer Brian O’Nolan (1911-1966) comes into question, the real name that stood behind the more famous masks of Flann O’Brien, novelist, author of At Swim-Two-Birds (1939), The Hard Life (1961), The Dalkey Archive (1964) and, posthumously, The Third Policeman (1967); and of Myles na Gopaleen, journalist, keeper for more than 25 years of the Cruiskeen Lawn column in The Irish Times (1940-1966), and author of the famous book-parody in Irish, An Béal Bocht (1941), later translated into English as The Poor Mouth (1973). Brian O’Nolan, a senior civil servant of the Republic by profession, has never seen his authorship recognized in literary studies, since all of them concentrate on his alter egos Flann, Myles and some others he used for minor contributions. As far as we know, this is the first study to place the real name in the title, thereby acknowledging in him a unity of intent that no one had granted him before.
And this choice of title is not a mere mark of distinction for its own sake, but also a deliberate sign of how his opus should now be reconsidered. In effect, the aim of this study is precisely to demonstrate how the empirical author Brian O’Nolan was the real Deus in machina, the puppet-master who skilfully steered all of his identities in planned directions, so as to fulfil completely the role of the comic figure explained above. Flann O’Brien and Myles na Gopaleen were personae and not persons, but the impression one gets from the critical studies on them is the exact opposite. Literary consideration, which came only after O’Nolan’s death, began with Anne Clissmann’s work, Flann O’Brien: A Critical Introduction to His Writings (Clissmann 1975), while the most recent book is Keith Donohue’s The Irish Anatomist: A Study of Flann O’Brien (Donohue 2002), passing through M. Keith Booker’s Flann O’Brien, Bakhtin and Menippean Satire (Booker 1995), Keith Hopper’s Flann O’Brien: A Portrait of the Artist as a Young Post-Modernist (Hopper 1995) and Monique Gallagher’s Flann O’Brien, Myles et les autres (Gallagher 1998). There have also been a couple of biographies, which incidentally try somehow to explain critical points of his literary production, while many critical studies do the same from the opposite side, trying to ground critical points of view in the author’s restless life and habits. At this stage, we attempted to merge into O’Nolan’s corpus the journalistic articles he wrote, more than 4,200 of them, amounting to roughly two million words over the column’s 26-year run. To justify this, we appealed to several considerations about the figure O’Nolan used as a writer: Myles na Gopaleen (later simplified to na Gopaleen), who was the equivalent of the street artist or storyteller, speaking to his imaginary public and trying to involve it in his stories, quarrels and debates of all kinds. First of all, he relied heavily on language for the reactions he would obtain, playing on, and with, words so as to ironically unmask untrue relationships between words and things. Secondly, he pushed to the limit the convention of addressing spectators and listeners usually employed in live performance, stretching its role in written discourse to achieve a greater effect of reader involvement. Lastly, he profited greatly from what we labelled his “specific weight”, i.e. the potential influence in society conferred by his recognised authority in certain matters, a position from which he could launch deeper attacks on conventional beliefs, thus complying with the duty of the comic we hypothesised earlier: that of criticising society even at the risk of losing the benefits the post guarantees. That seemingly masochistic tendency has its rationale. Every representative has many privileges on the assumption that he or she bears great responsibilities in administration. The higher those responsibilities, the higher the reward but also the harsher the punishment for misdeeds done while in charge. But we all know that not everybody accepts the rules, and many try to use their power for personal benefit and do not want to undergo the law’s penalties. The comic, showing in this case more civic sense than others, and helped greatly in this by having no access to the use of public force, finds in the role of the scapegoat the proper accomplishment of his task, accepting punishment when his breaking of conventions is too stark to be forgiven.
As Ceccarelli demonstrated, the role of the object of laughter (the comic, the ridiculous) has its own positive side: there is freedom of expression for the person and, at the same time, integration into society, even if at a low level. Thus the banishment of a ‘social’ comic can never amount to total extirpation from society, revealing how the scope of the comic lies on an entirely fictional plane, bearing no relation to facts and entailing no real consequences in terms of physical health. Myles na Gopaleen, who mastered the three characteristics we postulated to the highest degree, can be considered an author worth noting; and the oeuvre he wrote, the whole collection of Cruiskeen Lawn articles, is rightfully a novel because it respects the canons of the genre, especially regarding the authorial figure and his relationship with the readers. In addition, his work can be studied even though we cannot conduct our research on the whole of it, a procedure justified precisely by the resemblance to the real figure of the storyteller: its ‘chapters’ – the daily articles – had a format that even the distracted reader could follow, even one who had not read each and every previous article. So we can also critically consider a good part of them, as collected in the seven volumes published so far, with the addition of some others outside the collections, because completeness in this case is by no means a guarantee of greater precision in the assessment; on the contrary, examining the totality of the articles might lead us to consider him as a person and not a persona. Having cleared up these points, we proceeded to consider the works of Brian O’Nolan tout court as the works of a single author, rather than complicating the references with many names that are nothing other than well-wrought sides of the same personality. By taking O’Nolan as the proper object of our research, the empirical author of the works of the personae Flann O’Brien and Myles na Gopaleen, a clearer literary landscape emerges: the comic author Brian O’Nolan, conscious of his paramount role in society as both a guide and a scourge, in a word as an a(rche)typical figure, intentionally chose to differentiate his personalities so as to create different perspectives in different fields of knowledge, using, in addition, different means of communication: novels and journalism. We finally compared the newly assessed author Brian O’Nolan with other great Irish comic writers in English, such as James Joyce (the one everybody names as the master in the field), Samuel Beckett, and Jonathan Swift. This comparison showed once more that O’Nolan is in no way inferior to these authors who, though greatly celebrated by critics, nonetheless failed to achieve the broad public recognition O’Nolan received as Myles, granted by the daily audience he reached and influenced with his Cruiskeen Lawn column. For this reason, we believe him to be representative of the comic figure’s function as a social regulator and as a builder of solidarity, such as the one Raymond Williams spoke of in his work (Williams 1982), with the aim of building a ‘culture in common’ in mind. There is no way for a ‘culture in common’ to be acquired if we do not accept the fact that even the most functional society rests on conventions, and in a world that is more and more ‘connected’ we need someone to help everybody negotiate with different cultures and persons.
The comic gives us a worldly perspective which is at once comfortable and distressing but, in the end, not harmful in the way that the one furnished by politicians can be: he lets us peep into parallel worlds without moving too far from our armchair and, as a consequence, is the one who does his best to improve our understanding of things.

Relevance:

10.00%

Publisher:

Abstract:

Today’s pet food industry is growing rapidly, with pet owners demanding high-quality diets for their pets. The primary role of diet is to provide enough nutrients to meet metabolic requirements, while giving the consumer a feeling of well-being. Diet nutrient composition and digestibility are of crucial importance for the health and well-being of animals. A recent strategy to improve the quality of food is the use of “nutraceuticals” or “functional foods”. At the moment, probiotics and prebiotics are among the most studied and frequently used functional food compounds in pet foods. The present thesis reports results from three different studies. The first study aimed to develop a simple laboratory method to predict pet food digestibility. The developed method was based on the two-step multi-enzymatic incubation assay described by Vervaeke et al. (1989), with some modifications in order to better represent the digestive physiology of dogs. A trial was then conducted to compare the in vivo digestibility of pet foods with the in vitro digestibility obtained using the newly developed method. Correlation coefficients showed a close correlation between digestibility data for total dry matter and crude protein obtained with the in vivo and in vitro methods (0.9976 and 0.9957, respectively). Ether extract presented a lower correlation coefficient, although close to 1 (0.9098). Based on the present results, the new method can be considered an alternative system for evaluating dog food digestibility, reducing the need to use experimental animals in digestibility trials. The second part of the study aimed to isolate from dog faeces a Lactobacillus strain capable of exerting a probiotic effect on the dog intestinal microflora. A L. animalis strain was isolated from the faeces of 17 adult healthy dogs. The isolated strain was first studied in vitro, when it was added to a canine faecal inoculum (at a final concentration of 6 Log CFU/mL) that was incubated in anaerobic serum bottles and syringes which simulated the large intestine of dogs. Samples of fermentation fluid were collected at 0, 4, 8, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms, clostridia). Subsequently, the L. animalis strain was fed to nine dogs having lactobacilli counts lower than 4.5 Log CFU per g of faeces. The study indicated that the L. animalis strain was able to survive gastrointestinal passage and transitorily colonize the dog intestine. Both in vitro and in vivo results showed that the L. animalis strain positively influenced the composition and metabolism of the intestinal microflora of dogs. The third trial investigated in vitro the effects of several non-digestible oligosaccharides (NDO) on dog intestinal microflora composition and metabolism. Substrates were fermented using a canine faecal inoculum that was incubated in anaerobic serum bottles and syringes. Substrates were added at a final concentration of 1 g/L (inulin, FOS, pectin, lactitol, gluconic acid) or 4 g/L (chicory). Samples of fermentation fluid were collected at 0, 6, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms). Gas production was measured throughout the 24 h of the study. Among the tested NDO, lactitol showed the best prebiotic properties. In fact, it reduced coliform and increased lactobacilli counts, enhanced microbial fermentation and promoted the production of SCFA while decreasing BCFA.
All the substrates that were investigated showed one or more positive effects on dog faecal microflora metabolism or composition. Further studies (in particular in vivo studies with dogs) will be needed to confirm the prebiotic properties of lactitol and evaluate its optimal level of inclusion in the diet.
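
As a small aside on the in vivo versus in vitro comparison reported above, the sketch below shows the kind of calculation behind a correlation coefficient between the two sets of digestibility values. The data are hypothetical placeholders, not the thesis measurements, and the function is a plain Pearson correlation rather than any specific statistical package routine.

```python
# Hypothetical placeholder data, not the thesis measurements: a plain Pearson
# correlation between in vivo and in vitro digestibility coefficients.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical dry-matter digestibility (%) of five diets, measured both ways.
in_vivo = [82.1, 78.4, 85.0, 80.2, 76.9]
in_vitro = [81.5, 77.9, 84.6, 79.8, 76.2]
print(round(pearson_r(in_vivo, in_vitro), 4))  # close to 1 when the methods agree
```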

Relevance:

10.00%

Publisher:

Abstract:

For their survival, humans and animals can rely on motivational systems which are specialized in assessing the valence and imminence of dangers and appetitive cues. The Orienting Response (OR) is a fundamental response pattern that an organism executes whenever a novel or significant stimulus is detected, and has been shown to be consistently modulated by the affective value of a stimulus. However, detecting threatening stimuli and appetitive affordances while they are far away, compared to when they are within reach, constitutes an obvious evolutionary advantage. Building on the linear relationship between stimulus distance and retinal size, the present research was aimed at investigating the extent to which emotional modulation of distinct processes (action preparation, attentional capture, and subjective emotional state) is affected when the retinal size of a picture is reduced. Studies 1-3 examined the effects of picture size on emotional response. The subjective feeling of engagement, as well as sympathetic activation, was modulated by picture size, suggesting that action preparation and subjective experience reflect the combined effects of detecting an arousing stimulus and assessing its imminence. On the other hand, physiological responses which are thought to reflect the amount of attentional resources invested in stimulus processing did not vary with picture size. Studies 4-6 were conducted to substantiate and extend the results of studies 1-3. In particular, it was noted that a decrease in picture size is associated with a loss in the low spatial frequencies of a picture, which might confound the interpretation of the results of studies 1-3. Therefore, emotional and neutral images which were either low-pass filtered or reduced in size were presented, and affective responses were measured. Most effects which were observed when manipulating image size were replicated by blurring pictures. However, pictures depicting highly arousing unpleasant contents were associated with a more pronounced decrease in affective modulation when pictures were reduced in size compared to when they were blurred. The present results provide important information for the study of processes involved in picture perception and in the genesis and expression of an emotional response. In particular, the availability of high spatial frequencies might affect the degree of activation of an internal representation of an affectively charged scene, and might modulate subjective emotional state and preparation for action. Moreover, the manipulation of stimulus imminence revealed important effects of stimulus engagement on specific components of the emotional response, and the implications of the present data for some models of emotions have been discussed. In particular, within the framework of a staged model of emotional response, the tactical and strategic role of response preparation and attention allocation to stimuli varying in engaging power has been discussed, considering the adaptive advantages that each might represent in an evolutionary view. Finally, the identification of perceptual parameters that allow affective processing to be carried out has important methodological applications in future studies examining emotional response in basic research or clinical contexts.
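
The distance-size relationship mentioned at the start of this abstract can be made concrete with a small geometric sketch. The snippet below is purely illustrative and assumes made-up stimulus dimensions, not the experimental parameters of studies 1-6: it computes the visual angle subtended by a picture and shows that halving picture size at a fixed viewing distance yields the same visual angle as doubling the viewing distance.

```python
import math

# Illustrative geometry only (made-up dimensions, not the experimental parameters
# of studies 1-6): the visual angle subtended by a picture of a given physical size
# at a given viewing distance.

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus of `size_cm` seen from `distance_cm`."""
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

full_size = visual_angle_deg(30.0, 100.0)   # a 30 cm picture viewed from 1 m
half_size = visual_angle_deg(15.0, 100.0)   # the same picture reduced to half its size
farther = visual_angle_deg(30.0, 200.0)     # the full-size picture at twice the distance

# Halving picture size at a fixed distance gives the same visual angle as doubling the
# viewing distance, which is why size reduction can stand in for stimulus imminence.
print(round(full_size, 2), round(half_size, 2), round(farther, 2))
```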

Relevance:

10.00%

Publisher:

Abstract:

«Fiction of frontier». Phenomenology of an open form/voice. Francesco Giustini’s PhD dissertation fits into a line of research usually neglected by literary criticism which has nevertheless aroused much interest in recent years: the relationship between Literature and Space. In this context, the specific issue of his work is the category of the Frontier, including its several implications for twentieth-century fiction. The preliminary step, at the beginning of the first section of the dissertation, is a semantic analysis: with precision, Giustini describes the meaning of the word “frontier”, here declined in a multiplicity of cultural, political and geographical contexts: from the American frontier of the pioneers who headed West, to the exotic frontiers of the world with which imperialist colonization came into contact; from semi-uninhabited areas like deserts, highlands and virgin forests, to the ethnic frontiers between Indian and white people in South America, and on to the internal frontiers of countries, such as those between the District and the Capital City, the Centre and the Outskirts. In the next step, Giustini focuses on a true “myth of the frontier”, capable of nourishing the cultural and literary imagination. Indeed, literature has chosen the frontier as the setting for many stories; especially in the 20th century, it made the frontier a problematic space in the light of events and changes that have transformed the perception of space and our relationship with it. The dissertation therefore proposes a critical category, tracing the hallmarks of a specific literary phenomenon defined as the “Fiction of the Frontier” and present in many literary traditions during the 20th century. The term “Fiction” (not “Literature” or “Poetics”) does not define a genre but rather a “procedure”, focusing on a constant issue that emerges from the texts examined in this work: the strong call to the act of narration and to its oral traditions. The “Fiction of the Frontier” is perceived as an approach to the world, a way of watching and feeling objects, an emotion that is lived and told through the story – a story in which the narrator, through his body and his voice, takes on the role of the witness. The following parts, which are analytical in style, are constructed on the basis of this theoretical and methodological reflection. The second section offers a wide range of examples in which the figure and the myth of the frontier can be found, through textual analyses ranging over several literary traditions. It moves from monographic chapters (Garcia Marquez, Callado, McCarthy) to the comparative reading of pairs of texts (Calvino and Vargas Llosa, Buzzati and Coetzee, Arguedas and Rulfo). The selection of texts is arranged so as to underline a particular aspect or form of the frontier in each reading. This section is articulated into thematic entries which recall some of the actions that can be performed in the ambiguous and liminal space of the frontier (to communicate, to wait, to “trans-culturate”, to imagine, to inhabit, to not inhabit). In this phenomenology, the frontier comes to light as a physical and concrete element or as a cultural, imaginary, linguistic, ethnic and existential category. Finally, the third section is centered on a more detailed analysis of two authors considered fundamental for the comprehension of the “Fiction of the Frontier”: Joseph Conrad and João Guimarães Rosa.
Even though they are very different and belong to unlike literary traditions, these two authors show many connections, which are pointed out by the comparative analysis. Conrad is perhaps the first author to understand the feeling of the frontier, freeing himself from the adventure romance and from the exotic nineteenth-century tradition. João Guimarães Rosa, in turn, is the great narrator of the Brazilian and South American frontier, the man of the sertão and of the endless spaces of central Brazil. His production is strongly linked to that of the author of Heart of Darkness.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this proposal is to offer an alternative perspective on the study of the Cold War, since insufficient attention is usually paid to those organizations that mobilized against the development and proliferation of nuclear weapons. The antinuclear movement began to mobilize between the 1950s and the 1960s, when it finally gained the attention of public opinion and helped to build a sort of global conscience about nuclear bombs. This was due to the activism of a significant part of the international scientific community, which offered powerful intellectual and political legitimization to the struggle, and to the combined actions of scientific and organized protests. This antinuclear conscience is something we usually tend to consider a fait accompli in the contemporary world, but the point is to show its roots and the way it influenced statesmen and political choices during the period of nuclear confrontation of the early Cold War. To understand what this conscience could be and how it should be defined, we have to look at the very meaning of nuclear weapons, which have deeply modified the sense of war. Nuclear weapons seemed able to destroy human beings everywhere, with no realistic means of controlling the damage they could set off, and they represented the ultimate resource in the wide range of means of mass destruction. Even if we tend to consider this idea fully rational and incontrovertible, it was not born immediately with nuclear weapons themselves. Or, rather, not everyone in the world immediately shared it. Due to the particular climate of Cold War confrontation, deeply influenced by the persistence of realist paradigms in international relations, the British and U.S. governments looked at nuclear weapons simply as «a bullet». From the Trinity test to the signing of the Limited Test Ban Treaty in 1963, many things happened that helped to shift this view of nuclear weapons. First of all, more than ten years of scientific protest provided a more informed awareness of the consequences of nuclear tests and of the use of nuclear weapons. Many scientists devoted their social activities to informing public opinion and policy-makers about the real significance of the power of the atom and the related danger for human beings. Secondly, some public figures, such as physicists, philosophers, biologists, chemists, and so on, appealed directly to the human community to «leave the folly and face reality», publicly sponsoring the antinuclear conscience. Then, several organizations led by political, religious or radical individuals gave these protests a formal structure. The Campaign for Nuclear Disarmament in Great Britain, as well as the National Committee for a Sane Nuclear Policy in the U.S., represented the voice of the masses against the attempts of governments to present nuclear arsenals as a fundamental part of the international equilibrium. Therefore, the antinuclear conscience can be defined as a feeling of opposition to the development and use of nuclear weapons, able to create a political issue oriented towards influencing military and foreign policies. Only by taking into consideration the strength of this pressure does it seem possible to understand not only the beginning of nuclear negotiations, but also the reasons that allowed the Cold War to remain cold.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Flicker is a power quality phenomenon that refers to the cyclic instability of light intensity resulting from supply voltage fluctuation, which in turn can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, recently adopted by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the lamp – human eye – brain chain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. The human eye – brain model, in turn, is represented by the so-called flicker curve. Such a curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided index of annoyance, Pst, can be related to actual tiredness of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is “subjective”, given that it relies on people’s answers about their feelings. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye–brain response to flicker and of overcoming the strict dependence of the standard on the kind of light source. In this light, this thesis aims at presenting a contribution towards a new flickermeter. An improved visual system model based on a physiological parameter, the mean value of the pupil diameter, is presented, thus allowing a more “objective” representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers. The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
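Since the proposed pupil-based model is positioned against the standard annoyance index Pst, a minimal sketch of how Pst is commonly computed from the instantaneous flicker sensation (the output of the statistical classification stage of an IEC/EN 61000-4-15 flickermeter) may help fix ideas. The percentile weights follow the published Pst formula; the smoothing scheme for the percentiles is reproduced here from memory and should be checked against the standard text. This is an illustrative sketch, not the thesis implementation.

```python
import numpy as np

def short_term_flicker_severity(pinst, smoothed=True):
    """Estimate the short-term flicker severity Pst from a 10-minute series of
    instantaneous flicker sensation values (flickermeter statistical stage).

    P(q) denotes the level exceeded for q percent of the observation time,
    i.e. the (100 - q)-th percentile of the series.
    """
    p = lambda q: np.percentile(pinst, 100.0 - q)  # exceedance level
    if smoothed:
        # Smoothed percentiles: averages of neighbouring exceedance levels
        # (coefficients as commonly cited for IEC/EN 61000-4-15).
        p50s = np.mean([p(30), p(50), p(80)])
        p10s = np.mean([p(6), p(8), p(10), p(13), p(17)])
        p3s = np.mean([p(2.2), p(3), p(4)])
        p1s = np.mean([p(0.7), p(1), p(1.5)])
    else:
        p50s, p10s, p3s, p1s = p(50), p(10), p(3), p(1)
    p01 = p(0.1)
    # Standard weighted combination of percentiles.
    return np.sqrt(0.0314 * p01 + 0.0525 * p1s + 0.0657 * p3s
                   + 0.28 * p10s + 0.08 * p50s)

# Example with synthetic data standing in for a 10-minute record.
rng = np.random.default_rng(1)
print(short_term_flicker_severity(rng.gamma(2.0, 0.3, size=60_000)))
```

In a pupil-based flickermeter of the kind described above, the same statistical treatment could in principle be applied to the measured pupil-diameter signal instead of the modelled flicker sensation, which is precisely the substitution the thesis investigates.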

Relevância:

10.00% 10.00%

Publicador:

Resumo:

A main objective of human movement analysis is the quantitative description of joint kinematics and kinetics. This information can be of great help in addressing clinical problems both in orthopaedics and in motor rehabilitation. Previous studies have shown that the assessment of kinematics and kinetics from stereophotogrammetric data requires a setup phase, special equipment and expertise to operate. Besides, this procedure may cause a feeling of uneasiness in the subjects and may hinder their walking. The general aim of this thesis is the implementation and evaluation of new 2D markerless techniques, in order to contribute to the development of an alternative to the traditional stereophotogrammetric techniques. At first, the focus of the study was the estimation of the ankle-foot complex kinematics during the stance phase of gait. Two particular cases were considered: subjects barefoot and subjects wearing ankle socks. The use of socks was investigated in view of the development of the hybrid method proposed in this work. Different algorithms were analyzed, evaluated and implemented in order to obtain a 2D markerless solution for estimating the kinematics in both cases. The proposed technique was validated against a traditional stereophotogrammetric system. Its implementation leads towards an easy-to-configure (and more comfortable for the subject) alternative to the traditional stereophotogrammetric system. The abovementioned technique was then improved so that knee flexion/extension could also be measured with a 2D markerless technique. The main changes to the implementation concerned occlusion handling and background segmentation. With these additional constraints, the proposed technique was applied to the estimation of knee flexion/extension and compared with a traditional stereophotogrammetric system. Results showed that the knee flexion/extension estimates from the traditional stereophotogrammetric system and from the proposed markerless system were highly comparable, making the latter a potential alternative for clinical use. A contribution was also made to the estimation of lower limb kinematics of children with cerebral palsy (CP). For this purpose, a hybrid technique was proposed, which uses high-cut underwear and ankle socks as “segmental markers” in combination with a markerless methodology. The proposed hybrid technique differs from the abovementioned markerless technique in the algorithm chosen. Results showed that the proposed hybrid technique can become a simple and low-cost alternative to traditional stereophotogrammetric systems.
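As an illustration of the kind of output such a 2D markerless pipeline produces, the sketch below computes a sagittal-plane knee flexion/extension angle from three landmark positions (hip, knee, ankle) assumed to have already been extracted from the segmented silhouette of a video frame. The landmark coordinates and the zero-flexion convention are hypothetical; this is not the thesis implementation.

```python
import numpy as np

def knee_flexion_angle(hip, knee, ankle):
    """Planar knee flexion/extension angle in degrees from three 2D landmarks
    (pixel coordinates). 0 deg corresponds to full extension, i.e. thigh and
    shank segments aligned; the angle grows as the joint closes."""
    thigh = np.asarray(hip, float) - np.asarray(knee, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    inner = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return 180.0 - inner

# Hypothetical landmarks for one frame, e.g. taken from the silhouette contour.
print(knee_flexion_angle(hip=(320, 210), knee=(335, 330), ankle=(330, 455)))
```

Applied frame by frame, such a computation yields a flexion/extension curve over the gait cycle that can then be compared with the reference stereophotogrammetric output.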

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The ever-increasing demand for new services from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall feeling of spectrum scarcity. In order to circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies, such as Cognitive Radios, capable of accessing temporarily or locally unused bands without interfering with the licensed services; (ii) releasing some spectrum bands thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising, but they also pose novel coexistence and interference management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques in order to explicitly model their spatial distribution. In such a context, system performance and optimization depend strongly on this spatial configuration. On the other hand, allocating released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and a single-integral form for the average downlink rate and the Average Symbol Error Probability. Then, focusing on more regulatory aspects, interference challenges between DVB-T and LTE systems are analysed, and a versatile methodology for their proper coexistence is proposed. Moreover, the studies performed inside the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios and an analysis of the Hidden Node problem are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite Terrestrial Systems is presented.
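To make the stochastic-geometry setting concrete, the sketch below gives a Monte Carlo estimate of the downlink coverage probability for a receiver at the origin, served by a transmitter at a fixed distance, with interferers drawn from a homogeneous Poisson point process, Rayleigh fading and power-law path loss. It is an illustrative simulation under these common modelling assumptions, not the closed-form framework derived in the thesis; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_probability(lam, r0, alpha, theta, radius=50.0, trials=20_000):
    """Monte Carlo estimate of P(SIR > theta) at the origin.

    lam    : interferer density of the homogeneous PPP (nodes per unit area)
    r0     : distance to the serving transmitter
    alpha  : path-loss exponent
    theta  : SIR threshold
    radius : radius of the simulated disc containing the interferers
    """
    area = np.pi * radius ** 2
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * area)              # number of interferers
        r = radius * np.sqrt(rng.random(n))      # radii of uniform points in a disc
        h = rng.exponential(1.0, n)              # Rayleigh power fading
        interference = np.sum(h * r ** (-alpha)) if n else 0.0
        signal = rng.exponential(1.0) * r0 ** (-alpha)
        covered += signal > theta * interference
    return covered / trials

print(coverage_probability(lam=0.01, r0=5.0, alpha=4.0, theta=1.0))
```

Such a simulation is typically used to validate the analytical expressions: the Monte Carlo estimate should converge to the closed-form coverage probability as the number of trials and the simulated disc radius grow.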

Relevância:

10.00% 10.00%

Publicador:

Resumo:

This thesis examines the literature on local home bias, i.e. investor preference for geographically nearby stocks, and investigates the role of a firm’s visibility, profitability, and opacity in explaining such behavior. While a firm’s visibility is expected to proxy for the behavioral root of such a preference, its profitability and opacity are expected to capture the informational one. I find that less visible, more profitable and more opaque firms, conditional on demand, benefit from being headquartered in regions characterized by a scarcity of listed firms (local supply of stocks). Specifically, the estimates suggest that firms headquartered in regions with a poor supply of stocks would be worth i) 11 percent more if non-visible, non-profitable and non-opaque; ii) 16 percent more if profitable; and iii) 28 percent more if both profitable and opaque. Overall, as these features are able to explain most, albeit not all, of the local home bias effect, I argue, and then assess, that most of the preference for local stocks is driven by a successful attempt to exploit a local information advantage (60 percent), while the rest is driven by a mere (irrational) feeling of familiarity with the local firm (40 percent). Several significant methodological, theoretical, and practical implications follow.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The research starts from the premise that Aldo Rossi’s work has so far been analyzed according to a typological criterion. That approach is only one of the possible readings of the architect’s work. In the attempt to identify an interpretation of Rossi’s work tied to systems that remain unchanged over time, it proved necessary to examine in depth the relationship established between his work and the ground. Through the definition of two reading categories for the author’s projects, based on the physical continuity or discontinuity of the project with respect to the ground, it becomes clear how the relationship between site and project produces recurring solutions over time. According to this interpretation, the wall and the pillar constitute two fundamental elements of Rossi’s language. They, in turn, connect to a broader frame of reference whose cornerstones are tectonics and the art of masonry. The research is organized in three parts, each developed in specific chapters. The first part, the frame of reference, is needed to outline a vocabulary useful for isolating the topic under discussion. It is essential for understanding the position Rossi occupies with respect to the experiences that have occurred throughout history concerning the relationship between space, architecture and ground. The second part, the art of masonry, highlights the influence that the massive and plastic component of the terrain has had in defining specific design solutions. The third part, tectonics, outlines the opposite approach, identifying those projects in which the relationship with the ground has been diminished or even denied, increasing the sense of suspension of the volumes in space. Ultimately, the influence that the relationship with the ground has had on Rossi’s design choices is the central question of this research.