929 results for shifting baselines.
Abstract:
Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to worst-case access statistics, resulting in very frequent refresh cycles that are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either due to the extremely low probability of the worst case actually occurring, or due to the inherent error resilience of many applications that can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2 kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only in extending but also in favourably shaping the retention time distribution. Our approach is one of the first attempts to assess the data-integrity and energy tradeoffs achievable in eDRAMs for use in error-resilient applications, and can prove helpful in the anticipated shift to approximate computing.
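The refresh-rate argument above can be illustrated numerically (a hypothetical sketch: the lognormal retention times below are invented for illustration and are not the paper's measured distribution):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-cell retention times (seconds) for a 2 kb array;
# a lognormal spread stands in for the measured distribution.
retention = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=2048)

worst_case = retention.min()             # classic design point: refresh before ANY cell fails
tolerant = np.quantile(retention, 0.01)  # allow ~1% of cells to potentially fail

# Refresh power scales roughly with 1 / refresh interval.
savings = 1 - worst_case / tolerant
print(f"worst-case interval: {worst_case * 1e3:.3f} ms")
print(f"failure-tolerant interval: {tolerant * 1e3:.3f} ms")
print(f"approximate refresh-power saving: {savings:.0%}")
```

Setting the interval at a low quantile instead of the absolute minimum lengthens the refresh period, and refresh power falls roughly in inverse proportion to that interval.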
Abstract:
We address the problem of mining interesting phrases from subsets of a text corpus where the subset is specified using a set of features, such as keywords, that form a query. Previous algorithms for the problem have proposed solutions that involve sifting through a phrase-dictionary-based index or a document-based index, where the solution is linear in either the phrase dictionary size or the size of the document subset. We propose the use of an independence assumption between query keywords given the top correlated phrases, wherein the pre-processing can be reduced to discovering phrases from among the top phrases for each feature in the query. We then outline an indexing mechanism where per-keyword phrase lists are stored either on disk or in memory, so that popular aggregation algorithms such as No Random Access and Sort-merge Join may be adapted to do the scoring in real time to identify the top interesting phrases. Though such an approach is expected to be approximate, we empirically illustrate that very high accuracies (of over 90%) are achieved against the results of exact algorithms. Due to the simplified list aggregation, we are also able to provide response times that are orders of magnitude better than state-of-the-art algorithms. Interestingly, our disk-based approach outperforms the in-memory baselines by up to a hundred times and sometimes more, confirming the superiority of the proposed method.
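The per-keyword list aggregation can be sketched as follows (a simplified illustration: the function and the toy scores are hypothetical, and a real No Random Access variant would additionally maintain upper/lower score bounds so it can stop before exhausting the lists):

```python
import heapq
from collections import defaultdict

def top_k_phrases(per_keyword_lists, k=3):
    """Aggregate per-keyword (phrase, score) lists, each sorted by
    descending score, into the top-k phrases by summed score.
    This exhaustively merges the lists for clarity; an NRA-style
    implementation would prune using score bounds instead."""
    totals = defaultdict(float)
    for lst in per_keyword_lists:
        for phrase, score in lst:
            totals[phrase] += score
    return heapq.nlargest(k, totals.items(), key=lambda kv: kv[1])

# Toy per-keyword phrase lists for a two-keyword query.
lists = [
    [("machine learning", 0.9), ("neural nets", 0.7), ("data mining", 0.4)],
    [("data mining", 0.8), ("machine learning", 0.5), ("top-k queries", 0.3)],
]
print(top_k_phrases(lists, k=2))
```

Here "machine learning" (summed score 1.4) and "data mining" (1.2) surface as the top phrases across both keyword lists.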
Abstract:
In his essay Anti-Object, Kengo Kuma proposes that architecture cannot and should not be understood as object alone but instead always as a series of networks and connections, relationships within space and through form. Some of these relationships are tangible, others are invisible. Stan Allen and James Corner have also called for an architecture that is more performative and operative – ‘less concerned with what buildings look like and more concerned with what they do’ – as a means of effecting a more intimate and promiscuous relationship between infrastructure, urbanism and buildings. According to Allen, this expanding field offers a reclamation of some of the areas ceded by architecture following disciplinary specialization:
‘Territory, communication and speed are properly infrastructural problems and architecture as a discipline has developed specific technical means to deal with these variables. Mapping, projection, calculation, notation and visualization are among architecture’s traditional tools for operating at the very large scale’.
The motorway may not look like it – partly because we are no longer accustomed to thinking about it as such – but it is a site for and of architecture, a territory where architecture can be critical and active. If the limits of the discipline have narrowed, then one of the functions of a school of architecture must be an attempt to occupy those areas of the built environment where architecture is no longer, or has yet to reach. If this is a project about the reclamation of a landscape, it is also a challenge to some of the boundaries that surround architecture and often confine it, as Kuma suggests, to the appreciation of isolated objects.
M:NI 2014-15
We tend to think of the motorway as a thing or an object, something that has a singular function. Historically this is how it has been seen, with engineers designing bridges and embankments and suchlike with zeal … These objects like the M3 Urban Motorway, Belfast’s own Westway, are beautiful of course, but they have caused considerable damage to the city they were inflicted upon.
Actually, it’s the fact that we have seen the motorway as a solid object that has caused this problem. The motorway actually is a fluid and dynamic thing, and it should be seen as such: in fact it’s not an organ at all but actually tissue – something that connects rather than is. Once we start to see the motorway as tissue, it opens up new propositions about what the motorway is, is used for and does. This new dynamic and connective view unlocks the stasis of the motorway as edifice, and allows adaptation to happen: adaptation to old contexts that were ignored by the planners, and adaptation to new contexts that have arisen because of or in spite of our best efforts.
Motorways as tissue are more than just infrastructures: they are landscapes. These landscapes can be seen as surfaces on which flows take place, not only of cars, buses and lorries, but also of the globalized goods carried and the lifestyles and mobilities enabled. Here the infinite speed of urban change of thought transcends the declared speed limit [70 mph] of the motorway, in that a consignment of bananas can cause soil erosion in Ecuador, or the delivery of a new iPhone can unlock connections and ideas the world over.
So what is this new landscape to be like? It may be a parallax-shifting, cognitive looking glass; a drone scape of energy transformation; a collective farm, or maybe part of a hospital. But what’s for sure, is that it is never fixed nor static: it pulses like a heartbeat through that most bland of landscapes, the countryside. It transmits forces like a Caribbean hurricane creating surf on an Atlantic Storm Beach: alien forces that mutate and re-form these places screaming into new, unclear and unintended futures.
And this future is clear: the future is urban. In this small rural country, motorways as tissue have made the whole of it: countryside, mountain, sea and town, into one singular, homogenous and hyper-connected, generic city.
Goodbye, place. Hello, surface!
Abstract:
Climate and other environmental change presents a number of challenges for effective food safety. Food production, distribution and consumption take place within functioning ecosystems, but this backdrop is often ignored or treated as static and unchanging. The risks presented by environmental change include: novel pests and diseases, often caused by problem species expanding their spatial distributions as they track changing conditions; toxin generation in crops; direct effects on crop and animal production; consequences for trade networks driven by the shifting economic viability of production methods in changing environments; and, finally, the wholesale transformation of ecosystems as they respond to novel climatic regimes.
Abstract:
The application of chemometrics in food science has revolutionized the field by allowing the creation of models able to automate a broad range of applications such as food authenticity and food fraud detection. In order to create effective and general models able to address the complexity of real-life problems, a large number of varied training samples is required. The training dataset has to cover all possible types of sample and instrument variability. However, acquiring a varied set of samples is a time-consuming and costly process, in which collecting samples representative of real-world variation is not always possible, especially in some application fields. To address this problem, a novel framework for the application of data augmentation techniques to spectroscopic data has been designed and implemented. This is a carefully designed pipeline of four complementary and independent blocks which can be finely tuned depending on the desired variance for enhancing the model's robustness: a) blending spectra, b) changing the baseline, c) shifting along the x axis, and d) adding random noise.
This novel data augmentation solution has been tested in order to obtain a highly efficient, generalised classification model based on spectroscopic data. Fourier transform mid-infrared (FT-IR) spectroscopic data of eleven pure vegetable oils (106 admixtures) for the rapid identification of vegetable oil species in mixtures of oils were used as a case study to demonstrate the influence of this pioneering approach in chemometrics, obtaining a 10% improvement in classification accuracy, which is crucial in some applications of food adulteration detection.
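The four augmentation blocks can be sketched on a one-dimensional spectrum as follows (an illustrative sketch: the function names and parameter values are hypothetical, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def blend_spectra(s1, s2, alpha=0.5):
    # a) blending: convex combination of two spectra
    return alpha * s1 + (1.0 - alpha) * s2

def change_baseline(s, offset=0.01, slope=1e-4):
    # b) baseline change: add a constant offset plus a linear drift
    return s + offset + slope * np.arange(s.size)

def shift_x(s, k=2):
    # c) shift along the x (wavenumber) axis by k channels,
    #    repeating the edge value to keep the length fixed
    if k == 0:
        return s.copy()
    pad = np.full(abs(k), s[0] if k > 0 else s[-1])
    return np.concatenate([pad, s[:-k]]) if k > 0 else np.concatenate([s[-k:], pad])

def add_noise(s, sigma=0.005):
    # d) additive Gaussian noise
    return s + rng.normal(0.0, sigma, size=s.size)

spectrum = np.sin(np.linspace(0, 3 * np.pi, 256))  # stand-in for an FT-IR spectrum
augmented = add_noise(shift_x(change_baseline(spectrum)))
print(augmented.shape)
```

Because the blocks are independent, they can be composed in any order and with any subset active, which is what allows the variance injected into the training set to be tuned per application.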
Abstract:
The main focus of this thesis is the analysis of the main types of optical amplification and some of their applications in optical communication systems. For each of the technologies addressed, we sought to define the state of the art and to identify related opportunities for scientific development. The amplifiers given attention in this document were erbium-doped fibre amplifiers (EDFA), semiconductor optical amplifiers (SOA) and Raman amplifiers (RA). This work began with the study and analysis of EDFAs. Given the scientific and economic interest these amplifiers have attracted, only a few research niches remain open. Within these, we focused on the analysis of different doped optical fibre profiles in order to optimise the performance of these fibres as amplification systems. Taking the previous phase of the work as a modelling basis for amplification systems based on fibres and dopants, we moved on to doped waveguide amplifiers (EDWA). This type of amplifier attempts to reduce the physical volume of these devices while retaining their main characteristics. To compare the performance of this type of amplifier with fibre amplifiers, black-box models (BBM) were developed and their parameters tuned so as to obtain good modelling and allow the subsequent use of this type of amplifier in more complex simulation setups. After the processes in doped amplifiers were modelled and understood, and with a view to acquiring a comparative overview, it was imperative to study Raman parametric amplification processes. This type of amplification, being inherent to the fibre, occurs in all fibre propagation bands and is quite flexible. These amplifiers were initially modelled, and some of their applications in passive access networks were studied.
In particular, a set of requirements, such as the range of wavelengths over which amplification exists and high insertion losses, led us to investigate an amplification process that could meet them, especially in pursuit of greater amplification capabilities (namely long reaches, above 100 km, and high split ratios, 1:512). Another avenue investigated was the possibility of making the gain-wavelength parameters flexible without having to change the pump characteristics and, if possible, keeping all referencing at the transmitter. This process was based on the already well-studied gain-clamping technique, but with some important modifications, namely in the scheme (reflection at only one end) and in the modelling of the process. The resulting process was innovative in its use of Rayleigh and Raman scattering and of a reflector at only one end to obtain lasing. This process was modelled through the propagation equations and optimised, and was demonstrated experimentally and validated for different fibre types. Along these lines, and given the versatility of the developed model, a more advanced application for this type of amplifier was presented. Taking advantage of its ultra-fast response, a 2R regenerator was proposed and analysed, and its range of application studied by simulation with a view to its systemic deployment. The final part of this work concentrated on semiconductor optical amplifiers (SOA). For this type of amplifier, efforts were directed more at applications than at modelling. The main applications for these amplifiers were based on optical gain clamping, aiming at the combination of logic functions essential to the design of an optical latch based on discrete components.
Thus, based on a gain chip, a logic NOT gate was obtained, which was characterised and demonstrated experimentally. This gate was then introduced into a latching scheme to produce an all-optical bistable, which was also demonstrated and characterised. This work ends with a general conclusion covering the amplification subsystems and their applications.
Abstract:
In Portugal, it was estimated that around 1.95 Mton/year of wood is used in residential wood burning for heating and cooking. Additionally, in recent decades, burnt forest area has also been increasing. These combustions result in high levels of toxic air pollutants and a large perturbation of atmospheric chemistry, interfere with climate, and have adverse effects on health. Accurate quantification of the amounts of trace gases and particulate matter emitted from residential wood burning, agriculture and garden waste burning, and forest fires on a regional and global basis is essential for various purposes, including the investigation of several atmospheric processes, the reporting of greenhouse gas emissions, and the quantification of the air pollution sources that affect human health at regional scales. In Southern Europe, detailed emission factors for biomass burning are practically nonexistent. Emission inventories, source apportionment, and photochemical and climate change models use default values obtained for US and Northern European biofuels. Thus, it is desirable to use more specific, locally available data. The objective of this study is to characterise and quantify the contribution of biomass combustion sources to atmospheric trace gas and aerosol concentrations in a way more representative of the national reality. Laboratory (residential wood combustion) and field (agriculture/garden waste burning and experimental wildland fires) sampling experiments were carried out. In the laboratory, after the selection of the wood species and combustion equipment most representative of Portugal, a sampling programme to determine gaseous and particulate matter emission rates was set up, including organic and inorganic aerosol composition. In the field, the smoke plumes from agriculture/garden waste burning and experimental wildland fires were sampled.
The results of this study show that the combustion equipment and biofuel type used play an important role in emission levels and composition. Significant differences between the use of traditional and modern combustion equipment were also observed. These differences are due to the higher combustion efficiency of modern equipment, reflected in the smaller amounts of particulate matter, organic carbon and carbon monoxide released. With regard to experimental wildland fires in shrub-dominated areas, it was observed that the largest organic fraction in the samples studied was mainly composed of vegetation pyrolysis products. The major organic components in the smoke samples were pyrolysates of vegetation cuticles, mainly comprising steradienes and sterol derivatives, carbohydrates from the breakdown of cellulose, aliphatic lipids from vegetation waxes, and methoxyphenols from the thermal degradation of lignin. Despite being a banned practice in our country, agriculture/garden waste burning is actually quite common. To assess particulate matter composition, the smoke from three different agriculture/garden residues was sampled into three size fractions (PM2.5, PM2.5-10 and PM>10). Although the distribution patterns of organic compounds in particulate matter varied among residues, the amounts of phenolics (polyphenol and guaiacyl derivatives) and organic acids were always predominant over other organic compounds in the organosoluble fraction of the smoke. Among biomarkers, levoglucosan, β-sitosterol and phytol were detected in appreciable amounts in the smoke of all agriculture/garden residues. In addition, inositol may be considered a possible tracer for the smoke from potato haulm burning. It was shown that the prevailing ambient conditions (such as high humidity in the atmosphere) likely contributed to atmospheric processes (e.g. 
coagulation and hygroscopic growth), which influenced the particle size characteristics of the smoke tracers, shifting their distribution to larger diameters. An assessment of household biomass consumption was also made through a national scale survey. The information obtained with the survey combined with the databases on emission factors from the laboratory and field tests allowed us to estimate the pollutant amounts emitted in each Portuguese district. In addition to a likely contribution to the improvement of emission inventories, emission factors obtained for tracer compounds in this study can be applied in receptor models to assess the contribution of biomass burning to the levels of atmospheric aerosols and their constituents obtained in monitoring campaigns in Mediterranean Europe.
Abstract:
The development of a compact gamma camera with high spatial resolution is of great interest in Nuclear Medicine as a means to increase the sensitivity of scintigraphy exams and thus allow the early detection of small tumours. Following the introduction of the wavelength-shifting fibre (WSF) gamma camera by Soares et al. and the evolution of photodiodes into highly sensitive silicon photomultipliers (SiPMs), this thesis explores the development of a WSF gamma camera using SiPMs to obtain the position information of scintillation events in a continuous CsI(Na) crystal. The design is highly flexible, allowing the coverage of different areas and the development of compact cameras with very small dead areas at the edges. After initial studies which confirmed the feasibility of applying SiPMs, a prototype of 5 × 5 cm2 was assembled and tested at room temperature, with an active field-of-view of 10 × 10 mm2. Calibration and characterisation of the intrinsic properties of this prototype were done using 57Co, while extrinsic measurements were performed using a high-resolution parallel-hole collimator and 99mTc. In addition, a small mouse injected with a radiopharmaceutical was imaged with the developed prototype. Results confirm the great potential of SiPMs when applied in a WSF gamma camera, achieving spatial resolution superior to that of the traditional Anger camera. Furthermore, performance can be improved by optimising the experimental conditions in order to minimise and control the undesirable effects of thermal noise and of the non-uniformity of response across multiple SiPMs. The development and partial characterisation of a larger, 10 × 10 cm2 SiPM WSF gamma camera for clinical application are also presented.
Abstract:
Estuaries are complex ecosystems in which physical, chemical and biological processes are intimately linked. Bacterial dynamics in an estuary reflect the interaction and the high temporal and spatial variation of these processes. This work aimed to elucidate the interactions between physical, photochemical and microbiological processes in the estuarine system of Ria de Aveiro (Portugal). To that end, an initial field approach was carried out, during which the bacterial communities in the water column were characterised in terms of abundance and activity over 2 years. The study was conducted at two distinct sites, chosen as typifying the marine and brackish characteristics of the estuary. These sites have different hydrodynamics, river influences, and amounts and compositions of organic matter. From a mechanistic perspective, laboratory simulations were performed to elucidate the bacterial response to photo-transformed organic matter. The bacterial communities in the estuary adapt to different freshwater regimes, developing distinct abundance and activity patterns in the marine and brackish zones. High river flows induce vertical stratification in the marine zone, promoting the flux of phytoplankton from the sea into the estuary and of bacterioplankton from the estuary to the sea, and stimulate the import of particle-attached bacteria in the brackish zone. Advective transport and resuspension processes contribute to a threefold increase in the number of particle-attached bacteria during periods of intense river discharge. Additionally, bacterial activity in the estuary is controlled by the nitrogen concentration inherent to freshwater variations. The nitrogen supply, in association with the source of bacterial substrates, induces significant changes in productivity. The pattern of vertical variation of bacterial communities was distinct in the two zones of the estuary.
In the marine zone, bacteria in the surface microlayer (SML) showed higher hydrolysis rates but lower monomer incorporation and biomass production rates than in the underlying water (UW), whereas in the brackish zone the hydrolysis and incorporation rates were similar in the two compartments, but productivity was significantly higher in the SML. Although bacterial abundance was similar in the SML and UW, the fraction of particle-attached cells was significantly higher in the SML (2-3 times) in both zones of the estuary. The integration of the microbiological results with the environmental and hydrological variables showed that strong currents in the marine zone promote vertical mixing, inhibiting the establishment of a bacterial community in the SML distinct from that in the UW. In contrast, in the brackish zone, lower current velocities provide suitable conditions for increased bacterial activity in the SML. Site-specific characteristics, such as hydrodynamics and the sources and composition of organic matter, also lead to different degrees of surface enrichment in organic and inorganic matter, influencing its transformation. In general, the estuarine SML environment favours the hydrolysis of polymers but inhibits the use of monomers, compared with the underlying water. However, the differences between the two communities tend to attenuate with increasing heterotrophic activity in the brackish zone. Chromophoric dissolved organic matter (CDOM) from the two zones of the estuary has different spectral characteristics, with higher aromaticity and mean molecular weight (HMW) in the brackish zone compared with the marine zone. In this zone, bacterial abundance correlated with a350 and a254, suggesting an indirect bacterial contribution to HMW CDOM. Irradiation of the DOM resulted in a decrease of the a254 and a350 values and in an increase of the S275-295 slope and of the E2:E3 (a250/a365) and SR ratios.
However, the extent of photo-induced transformations and the microbial responses depend on the initial CDOM characteristics, inferred from its optical properties. Estuarine dynamics clearly influence heterotrophic activities and the distribution of microorganisms in the water column. Freshwater input influences the dynamics and the main regulators of the bacterial communities in the estuary. Photochemical and microbial processes produce changes in the optical properties of CDOM, and the combination of these processes determines the overall outcome and fate of CDOM in estuarine systems, with influence on productivity in the adjacent coastal areas.
Abstract:
This thesis reports photoluminescence studies of oxide and phosphate nanoparticles doped with trivalent lanthanide ions, respectively (Gd,Eu)2O3 and (Gd,Yb,Er)2O3 nanorods and (Gd,Yb,Tb)PO4 nanocrystals, also demonstrating applications of these materials in smart coatings, temperature sensors and bioimaging. Energy transfer between the Eu3+ C2 and S6 sites of the Gd2O3 nanorods is studied. The contribution of inter-site energy transfer mechanisms to the 5D0(C2) rise time is ruled out in favour of direct 5D1(C2) → 5D0(C2) relaxation (i.e., energy transfer between levels). The longer decay time of the 5D0(C2) level in the nanorods, relative to the value measured for the same material in microcrystalline form, is attributed both to the existence of free space between neighbouring nanorods (filling factor or volume fraction) and to the variation of the effective refractive index of the medium around the Eu3+ ions. Dispersing (Gd,Eu)2O3 nanorods in three commercial epoxy resins through UV curing yields epoxy-(Gd,Eu)2O3 nanocomposites. Kinetic studies and the thermal and photoluminescence properties of these nanocomposites are reported. They preserve the typical Eu3+ emission properties, showing the potential of the UV-curing method for obtaining smart, photoactive coatings. A significant advance is the realisation of an optical nanoplatform, incorporating heater and thermometer and capable of measuring a wide temperature range (300-2000 K) at the nanoscale, based on (Gd,Yb,Er)2O3 nanorods (thermometers) whose surface is coated with gold nanoparticles. The local temperature is calculated using either the Boltzmann distribution (300-1050 K) of the intensity ratio of the 2H11/2 → 4I15/2 and 4S3/2 → 4I15/2 upconversion emissions, or Planck's law (1200-2000 K) for a white-light emission attributed to blackbody radiation.
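The Boltzmann-based ratiometric reading mentioned above follows the standard luminescence-intensity-ratio form (a generic sketch; the pre-exponential constant $C$ and the calibration details are not given here):

$$ R \equiv \frac{I\left({}^2H_{11/2}\rightarrow{}^4I_{15/2}\right)}{I\left({}^4S_{3/2}\rightarrow{}^4I_{15/2}\right)} = C\,\exp\!\left(-\frac{\Delta E}{k_{B}T}\right) \quad\Longrightarrow\quad T = \frac{\Delta E}{k_{B}\,\ln(C/R)}, $$

where $\Delta E$ is the energy gap between the thermally coupled ${}^2H_{11/2}$ and ${}^4S_{3/2}$ levels and $k_B$ is the Boltzmann constant, so a single measured intensity ratio $R$ yields the local temperature.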
Finally, the photoluminescence properties corresponding to energy upconversion and downconversion in hydrothermally synthesised (Gd,Yb,Tb)PO4 nanocrystals are studied. The 1H relaxivity (magnetic resonance) of these materials is investigated, with a view to possible applications in bimodal imaging (luminescence and nuclear magnetic resonance).
Abstract:
The purpose of this work is to carry out a comprehensive study of the Western Iberian Margin (WIM) circulation by means of numerical modeling, and to postulate what this circulation will be in the future. The adopted approach was the development of a high-resolution regional ocean model configuration capable of reproducing the large- and small-scale dynamics of the coastal transition zone. Four numerical experiments were carried out according to these objectives: (1) a climatological run, in order to study the system’s seasonal behavior and mean state; (2) a run forced with real winds and fluxes for the period 2001-2011, in order to study the interannual variability of the system; (3) a run forced with mean fields from Global Climate Models (GCMs) for the present, in order to validate GCMs as adequate forcing for regional ocean modeling; (4) a run similar to (3) for the period 2071-2100, in order to assess possible consequences of a future climate scenario for the hydrography and dynamics of the WIM. Furthermore, two Lagrangian particle studies were carried out: one to trace the origin of the upwelled waters along the WIM; the other to portray the patterns of larval dispersal, accumulation and connectivity. The numerical configuration proved adequate for reproducing the system’s mean state, its seasonal characterization and its interannual variability. There is a prevalence of poleward flow at the slope, which coexists with the upwelling jet during summer, although there is evidence of its shifting offshore, and which is associated with the Mediterranean Water flow at deeper levels, suggesting a barotropic character. From the future climate scenario experiment, the following conclusions were drawn: there is a general warming and freshening of upper-level waters; the poleward tendency persists; and, despite the strengthening of upwelling-favorable winds in summer, the respective coastal band becomes more restricted in width and depth. 
Regarding larval connectivity and dispersal along the WIM, diel vertical migration was observed to increase recruitment throughout the domain; and while smooth coastlines are better suppliers, accumulation is higher where the topography is rougher.
Abstract:
This book chapter extends the argument constructed by Oakley in his conference paper ‘Containing gold: Institutional attempts to define and constrict the values of precious metal objects’, presented at ‘Itineraries of the Material’, a conference held at Goethe Universitaet, Frankfurt am Main, in 2011. Oakley’s chapter investigates the social forces that define the identities, social pathways and physical movement of objects made of precious metal. It presents a case study in which constitutive substance, rather than the conceptual object, is the key driver behind the social trajectories of numerous artefacts and their reception by contemporary audiences. This supports the main contention of the book as a whole: the need to reconsider, and when necessary challenge, the dominance of the social biography of objects in the study of material culture. Oakley’s research used historical and ethnographic approaches, including three years of ethnographic field research in the jewellery industry. This included training as a precious metal assayer at the Birmingham Assay Office and observing the industry and public response to government proposals to abolish the hallmarking legislation. This fieldwork was augmented by archive, library and object collection research on the histories of assaying and goldsmithing. Oakley presents an analysis of the historical development and contemporary social relevance of hallmarking, a technological process that has never previously been subject to ethnographic study, yet is fundamental to one of the UK’s creative industries.
Abstract:
Intimate Ecologies considers the practice of exhibition-making over the past decade in formal museum and gallery spaces and its relationship to creating a concept of craft in contemporary Britain. Different forms of expression found in traditions of still life painting, film and moving image, poetic text and performance are examined to highlight the complex layers of language at play in exhibitions and within a concept of craft. The thesis presents arguments for understanding the value of embodied material knowledge to aesthetic experience in exhibitions, across a spectrum of human expression. These are supported by reference to exhibition case studies; critical and theoretical works from fields including social anthropology, architecture, art and design history and literary criticism; and a range of individual, original works of art. Intimate Ecologies concludes that the museum exhibition, as a creative medium for understanding objects, becomes enriched by close study of material practice and by embodied knowledge that draws on a concept of craft. In turn, a concept of craft is refreshed by makers’ participation in shifting patterns of exhibition-making in cultural spaces that allow the layers of language embedded in complex objects to be experienced from different perspectives. Both art-making and the experience of objects are intimate, and infinitely varied: a vibrant ecology of exhibition-making gives space to this diversity.
Abstract:
Policy in Child and Adolescent Mental Health (CAMH) in England has undergone radical changes in the last 15 years, with far-reaching implications for funding models, access to services and service delivery. Using corpus analysis and critical discourse analysis, we explore how childhood, mental health, and CAMHS are constituted in 15 policy documents: 9 pre-2010 and 6 post-2010. We trace how these constructions have changed over time, and consider the practice implications of these changes. We identify how children’s distress is individualised through medicalising discourses and shifting understandings of the relationship between socioeconomic context and mental health. This is evidenced in a shift from seeing children’s mental health challenges as produced by social and economic inequities to a view that children’s mental health must be addressed early to prevent future socio-economic burden. We consider the implications of CAMHS policies for the relationship between children, families, mental health services and the state. The paper concludes by exploring how the concepts of ‘parity of esteem’ and ‘stigma reduction’ may inadvertently exacerbate the individualisation of children’s mental health.