894 results for Moduli in modern mapping theory


Relevance:

100.00%

Publisher:

Abstract:

We use the finite element method to solve coupled problems between pore-fluid flow and heat transfer in fluid-saturated porous rocks. In particular, we investigate the effects of both hot pluton intrusion and topographically driven horizontal flow on the distributions of the pore-fluid velocity and temperature in large-scale hydrothermal systems. Since general mineralization patterns are strongly dependent on the distributions of both the pore-fluid velocity and temperature fields, modern mineralization theory has been used to predict the general mineralization patterns in several realistic hydrothermal systems. The related numerical results have demonstrated that: (1) The existence of a hot intrusion can cause an increase in the maximum value of the pore-fluid velocity in the hydrothermal system. (2) The permeability of an intruded pluton is one of the sensitive parameters controlling pore-fluid flow, heat transfer and ore body formation in hydrothermal systems. (3) The maximum value of the pore-fluid velocity increases when the bottom temperature of the hydrothermal system is increased. (4) Topographically driven flow has significant effects on the pore-fluid flow, temperature distribution and precipitation pattern of minerals in hydrothermal systems. (5) The size of the computational domain may have some effects on the pore-fluid flow and heat transfer, indicating that the size of a hydrothermal system may affect the pore-fluid flow and heat transfer within the system. (C) 2003 Elsevier Science B.V. All rights reserved.
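The interplay the results describe — an imposed flow velocity skewing the temperature field — can be illustrated on the simplest possible model: steady one-dimensional advection-diffusion of temperature, -k T'' + v T' = 0, with fixed boundary temperatures. The sketch below uses central finite differences and the Thomas algorithm rather than the paper's finite element method, so it is only a hedged toy version of the coupled problem:

```python
def temperature_profile(k, v, T0, TL, L, n):
    """Steady 1-D advection-diffusion: -k T'' + v T' = 0 on [0, L],
    with T(0) = T0 and T(L) = TL. Central differences on n interior
    nodes, solved with the Thomas (tridiagonal) algorithm."""
    h = L / (n + 1)
    a = -k / h**2 - v / (2 * h)   # sub-diagonal coefficient
    b = 2 * k / h**2              # diagonal coefficient
    c = -k / h**2 + v / (2 * h)   # super-diagonal coefficient
    # Right-hand side carries the known boundary temperatures.
    d = [0.0] * n
    d[0] -= a * T0
    d[-1] -= c * TL
    # Forward elimination.
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c / b
    dp[0] = d[0] / b
    for i in range(1, n):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (d[i] - a * dp[i - 1]) / denom
    # Back substitution.
    T = [0.0] * n
    T[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        T[i] = dp[i] - cp[i] * T[i + 1]
    return [T0] + T + [TL]
```

Increasing v (stronger advection) pushes the temperature profile toward the outflow boundary, which is the one-dimensional analogue of the velocity-temperature coupling the abstract discusses.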

Relevance:

100.00%

Publisher:

Abstract:

Modern real-time systems increasingly generate heavy, dynamic computational loads, making it less and less realistic to expect them to be implemented on uniprocessor systems. Indeed, the shift from single-processor to multiprocessor systems can be seen, in both the general-purpose and the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply annotating potentially parallel regions within their applications. The system treats all such annotations merely as hints, which may be ignored and replaced by equivalent sequential constructs by the language itself. How the computation is actually subdivided and mapped onto the various processors is thus the responsibility of the compiler and the underlying computing system. Removing this burden from the programmer considerably reduces programming complexity, which normally translates into a gain in productivity. However, unless the underlying scheduling mechanism is simple and fast enough to keep the overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space, and communication requirements.
However, these algorithms take no account of timing constraints, nor of any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have served it so well. Very briefly, the single conventional task queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this we apply the well-known dynamic scheduling algorithm G-EDF, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a demanding task, owing to the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept. Accordingly, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between various synchronisation mechanisms.
The experimental results show that RTWS, compared with other practical work on dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while at the same time achieving good dynamic load balancing of the system, and at low cost. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping scheduling overhead low and achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not provide a clear picture of the impact of one versus the other. Overall, though, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
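The queue restructuring described above — replacing the single work-stealing deque with a priority-ordered queue of deques — can be sketched in a few lines. This is an illustrative Python model under EDF-style priorities (smaller absolute deadline = higher priority); the class and method names are hypothetical, and the real RTWS is implemented in C inside the Linux scheduler:

```python
from collections import deque

class PriorityWorkStealingQueue:
    """Sketch of a per-CPU RTWS-style structure: instead of one deque,
    a list of deques kept sorted by task priority (here, an EDF-style
    absolute deadline: smaller deadline = higher priority)."""

    def __init__(self):
        self._levels = []  # list of (deadline, deque), sorted by deadline

    def push(self, deadline, task):
        """Owner enqueues a task at the tail of its priority level."""
        for d, dq in self._levels:
            if d == deadline:
                dq.append(task)
                return
        self._levels.append((deadline, deque([task])))
        self._levels.sort(key=lambda level: level[0])

    def pop(self):
        """Owner dequeues from the head of the highest-priority level."""
        return self._take(lambda dq: dq.popleft())

    def steal(self):
        """A thief takes from the opposite end of the highest-priority
        level, which limits contention with the owner."""
        return self._take(lambda dq: dq.pop())

    def _take(self, take_from):
        while self._levels:
            deadline, dq = self._levels[0]
            if dq:
                return deadline, take_from(dq)
            self._levels.pop(0)  # drop exhausted priority level
        return None
```

Both the owner and a thief always operate on the earliest-deadline work available, which is the property that lets G-EDF-style analysis be layered on top of work stealing.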

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Conservation and Restoration, speciality of Conservation Sciences, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance:

100.00%

Publisher:

Abstract:

This paper attempts to prove that in the years 1735 to 1755 Venice was the birthplace and cradle of Modern architectural theory, generating a major crisis in classical architecture, traditionally based on the Vitruvian assumption that it imitates early wooden structures in stone or in marble. According to its rationalist critics, such as the Venetian Observant Franciscan friar and architectural theorist Carlo Lodoli (1690-1761) and his nineteenth-century followers, classical architecture is singularly deceptive and not true to the nature of materials; in other words, dishonest and fallacious. This questioning did not emanate from practising architects, but from Lodoli himself, a philosopher and educator of the Venetian patriciate who had not been trained as an architect. The roots of this crisis lay in a new approach to architecture stemming from the rationalist philosophy of the Enlightenment, with its emphasis on reason and universal criticism.

Relevance:

100.00%

Publisher:

Abstract:

The autosomal recessive forms of limb-girdle muscular dystrophies are encoded by at least five distinct genes. The work performed towards the identification of two of these is summarized in this report. This success illustrates the growing importance of genetics in modern nosology.

Relevance:

100.00%

Publisher:

Abstract:

This paper evaluates the reception of Léon Walras' ideas in Russia before 1920. Despite an unfavourable institutional context, Walras was read by Russian economists. On the one hand, Bortkiewicz and Winiarski, who lived outside Russia and had the opportunity to meet and correspond with Walras, were first-class readers and very good ambassadors for Walras' ideas; on the other, the economists living in Russia were more selective in their readings. They restricted themselves to Walras' Elements of Pure Economics, in particular its theory of exchange, while ignoring its theory of production. We introduce a cultural argument to explain this selective reading. JEL classification numbers: B13, B19.

Relevance:

100.00%

Publisher:

Abstract:

A series of 4 experiments examined the performance of rats with retrohippocampal lesions on a spatial water-maze task. The animals were trained to find and escape onto a hidden platform after swimming in a large pool of opaque water. The platform was invisible and could not be located using olfactory cues. Successful escape performance required the rats to develop strategies of approaching the correct location with reference solely to distal extramaze cues. The lesions encompassed the entire rostro-caudal extent of the lateral and medial entorhinal cortex, and included parts of the pre- and para-subiculum, angular bundle and subiculum. Groups ECR 1 and 2 sustained only partial damage of the subiculum, while Group ECR+S sustained extensive damage. These groups were compared with sham-lesion and unoperated control groups. In Expt 1A, a profound deficit in spatial localisation was found in groups ECR 1 and ECR+S, the rats receiving all training postoperatively. In Expt 1B, these two groups showed hyperactivity in an open-field. In Expt 2, extensive preoperative training caused a transitory saving in performance of the spatial task by group ECR 2, but comparisons with the groups of Expt 1A revealed no sustained improvement, except on one measure of performance in a post-training transfer test. All rats were then given (Expt 3) training on a cueing procedure using a visible platform. The spatial deficit disappeared but, on returning to the normal hidden platform procedure, it reappeared. Nevertheless, a final transfer test, during which the platform was removed from the apparatus, revealed a dissociation between two independent measures of performance: the rats with ECR lesions failed to search for the hidden platform but repeatedly crossed its correct location accurately during traverses of the entire pool. This partial recovery of performance was not (Expt 4) associated with any ability to discriminate between two locations in the pool. 
The apparently selective recovery of aspects of spatial memory is discussed in relation to O'Keefe and Nadel's (1978) spatial mapping theory of hippocampal function. We propose a modification of the theory in terms of a dissociation between procedural and declarative subcomponents of spatial memory. The declarative component is a flexible access system in which information is stored in a form independent of action. It is permanently lost after the lesion. The procedural component is "unmasked" by the retrohippocampal lesion giving rise to the partial recovery of spatial localisation performance.

Relevance:

100.00%

Publisher:

Abstract:

Decisions taken in modern organizations are often multi-dimensional, involving multiple decision makers and several criteria measured on different scales. Multiple Criteria Decision Making (MCDM) methods are designed to analyze such situations and to give recommendations. Among the numerous MCDM methods, two large families are the multi-attribute utility theory based methods and the outranking methods. Traditionally, both families require exact values for technical parameters and criteria measurements, as well as for preferences expressed as weights. Often it is hard, if not impossible, to obtain exact values. Stochastic Multicriteria Acceptability Analysis (SMAA) is a family of methods designed to help in situations where exact values are not available. Different variants of SMAA allow handling all types of MCDM problems. They support defining the model through uncertain, imprecise, or completely missing values. The methods are based on simulation, which is applied to obtain descriptive indices characterizing the problem. In this thesis we present new advances in the SMAA methodology. We present and analyze algorithms for the SMAA-2 method and its extension to handle ordinal preferences. We then present an application of SMAA-2 to an area where MCDM models have not been applied before: planning elevator groups for high-rise buildings. Following this, we introduce two new methods to the family: SMAA-TRI, which extends ELECTRE TRI to sorting problems with uncertain parameter values, and SMAA-III, which extends ELECTRE III in a similar way. Efficient software implementing these two methods has been developed in conjunction with this work, and is briefly presented in this thesis. The thesis closes with a comprehensive survey of SMAA methodology, including the definition of a unified framework.
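The simulation at the core of SMAA-2 can be sketched as follows, for an additive utility model with no preference information (weights drawn uniformly from the simplex). The function name and data layout are illustrative, not the published SMAA-2 interface:

```python
import random

def rank_acceptability(alternatives, n_samples=10000, rng=None):
    """SMAA-2 style rank acceptability indices for an additive utility
    model with completely unknown weights.

    alternatives: dict name -> list of (already scaled) criterion values.
    Returns dict name -> list where entry r is the share of weight
    samples in which the alternative achieved rank r (0 = best)."""
    rng = rng or random.Random(0)
    names = list(alternatives)
    m = len(next(iter(alternatives.values())))  # number of criteria
    counts = {name: [0] * len(names) for name in names}
    for _ in range(n_samples):
        # Uniform sample from the weight simplex: gaps between
        # sorted uniform random numbers sum to one.
        cuts = sorted(rng.random() for _ in range(m - 1))
        w = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        utilities = {name: sum(wi * xi
                               for wi, xi in zip(w, alternatives[name]))
                     for name in names}
        ranked = sorted(names, key=lambda name: -utilities[name])
        for r, name in enumerate(ranked):
            counts[name][r] += 1
    return {name: [c / n_samples for c in counts[name]] for name in names}
```

A dominating alternative attains rank-1 acceptability 1.0 regardless of the weights, which is the descriptive behaviour the indices are designed to capture.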

Relevance:

100.00%

Publisher:

Abstract:

This study focuses on international diversification from the viewpoint of a Finnish investor. A second objective is to examine whether new covariance matrix estimators make the optimisation of the minimum-variance portfolio more efficient. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimisation. The data consist of Dow Jones industry indices and the OMX-H portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimised using twelve components. The data cover the years 1996-2005, i.e. 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. According to the results, there is no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators add statistically significant value compared with portfolio optimisation based on the sample covariance matrix.
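The two building blocks of such an experiment — a shrinkage covariance estimator and the closed-form minimum-variance weights — can be sketched as below. This is a hedged illustration: the shrinkage intensity is fixed rather than estimated from the data (as the study's estimators do), and the function names are hypothetical:

```python
def shrink_covariance(sample_cov, delta):
    """Shrink the sample covariance toward a diagonal target with a
    fixed intensity delta in [0, 1] (a simplified stand-in for the
    data-driven shrinkage estimators compared in the study)."""
    n = len(sample_cov)
    return [[(delta * (sample_cov[i][i] if i == j else 0.0)
              + (1 - delta) * sample_cov[i][j]) for j in range(n)]
            for i in range(n)]

def min_variance_weights(cov):
    """Minimum-variance weights w proportional to inverse(Sigma) * 1,
    normalised to sum to one (no short-sale constraint). Solves
    Sigma x = 1 by Gauss-Jordan elimination with partial pivoting."""
    n = len(cov)
    A = [row[:] + [1.0] for row in cov]  # augment with the ones vector
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    x = [A[i][n] / A[i][i] for i in range(n)]
    s = sum(x)
    return [xi / s for xi in x]
```

For two assets the result reduces to the textbook closed form w1 = (s2^2 - s12) / (s1^2 + s2^2 - 2 s12).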

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Used in conjunction with biological surveillance, behavioural surveillance provides data allowing for a more precise definition of HIV/STI prevention strategies. In 2008, a mapping of behavioural surveillance in EU/EFTA countries was performed on behalf of the European Centre for Disease Prevention and Control. METHOD: Nine questionnaires were sent to all 31 Member States and EEA/EFTA countries requesting data on the overall behavioural and second-generation surveillance system and on surveillance in the general population, youth, men having sex with men (MSM), injecting drug users (IDU), sex workers (SW), migrants, people living with HIV/AIDS (PLWHA), and sexually transmitted infection (STI) clinic patients. Requested data included information on system organisation (e.g. sustainability, funding, institutionalisation), topics covered in surveys and main indicators. RESULTS: Twenty-eight of the 31 countries contacted supplied data. Sixteen countries reported an established behavioural surveillance system, and 13 a second-generation surveillance system (combination of biological surveillance of HIV/AIDS and STI with behavioural surveillance). There were wide differences as regards the year of survey initiation, number of populations surveyed, data collection methods used, organisation of surveillance and coordination with biological surveillance. The populations most regularly surveyed are the general population, youth, MSM and IDU. SW, patients of STI clinics and PLWHA are surveyed less regularly and in only a small number of countries, and few countries have undertaken behavioural surveys among migrant or ethnic minority populations. In many cases, the identification of populations with risk behaviour and the selection of populations to be included in a behavioural surveillance system have not been formally conducted, or are incomplete. Topics most frequently covered are similar across countries, although many different indicators are used.
In most countries, sustainability of surveillance systems is not assured. CONCLUSION: Although many European countries have established behavioural surveillance systems, there is little harmonisation as regards the methods and indicators adopted. The main challenge now faced is to build and maintain organised and functional behavioural and second generation surveillance systems across Europe, to increase collaboration, to promote robust, sustainable and cost-effective data collection methods, and to harmonise indicators.

Relevance:

100.00%

Publisher:

Abstract:

The major objective of this thesis is to describe and analyse how a rail carrier is engaged in an intermodal freight transportation network through its role and position. Because the role as a conceptualisation has many parallels with the position, both phenomena are evaluated theoretically and empirically. VR Cargo (a strategic business unit of the Finnish railway company VR Ltd.) was chosen as the focal firm, surrounded by the actors of the focal net. Because networks are sets of relationships rather than sets of actors, it is essential to describe the dimensions of the relationships created over time, thus having a past, present and future. Roles are created during the long common history shared by the actors, especially where IM networks are concerned. The presence of roles is embedded in the tasks, and the future is anchored to the expectations. Furthermore, in this study role refers to network dynamics, and to incremental and radical changes in the network, in a similar way as position refers to stability and to the influences of bonded structures. The main purpose of the first part of the study was to examine how the two distinctive views that hold a dominant position in modern logistics, the network view (particularly the IMP-based network approach) and the managerial view (represented by Supply Chain Management), differ, especially when intermodalism is under consideration. In this study intermodalism was defined as a form of interorganisational behaviour characterized by the physical movement of unitized goods with Intermodal Transport Units, using more than one mode, as performed by the net of operators. At this particular stage the study relies mainly on theoretical evaluation, broadened by some discussions with practitioners. This is essential, because continuous dialogue between theory and practice is strongly emphasized.
Some managerial implications are discussed on the basis of the theoretical examination, and a tentative model for empirical analysis in subsequent research is suggested. The empirical investigation, which relies on interviews among the members of the focal net, shows that the major role of the focal company in the network is that of a common carrier. This role has certain behavioural and functional characteristics, such as an executive disclosure expressing strategic will, combined with stable and predictable managerial and organisational behaviour. Most important is the notion that the focal company is neutral towards all the other operators and willing to enhance and strengthen collaboration with all the members of the IM network. This also means that all accounts are intended to be treated equally in terms of customer satisfaction. Moreover, the adjustments intensify the adopted role. However, the focal company is also obliged to sustain its role, as it still holds a government-granted exclusive right to operate railway services on domestic tracks. In addition, the roles of dominator, principal, partner, subcontractor, and integrator were present, appearing either in a dyadic relationship or in a net(work) context. In order to reveal the different roles, a dualistic interpretation of the concept of role/position was employed.

Relevance:

100.00%

Publisher:

Abstract:

The uncertainty of any analytical determination depends on both analysis and sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy's sampling theory is currently the most complete theory of sampling, and is also the one that takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work, Gy's sampling theory was applied to several cases, including the analysis of chromite concentration estimated on SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy's sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties achieved are reliable. The variographic experiments introduced in Gy's sampling theory are beneficially applied in analyzing the uncertainty of auto-correlated data sets, such as industrial process data and environmental discharges. The periodic behaviour of such processes can be observed by variographic analysis, as well as with the fast Fourier transform and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental or process data are analyzed, as it can easily be estimated how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources will be used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments.
Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may in the end have beneficial effects on the economics of chemical analysis.
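In its simplest form, a variographic experiment reduces to computing the experimental variogram of a process series as a function of the sampling lag; periodic process behaviour then shows up as dips at lags equal to the period. The sketch below is a bare-bones version (Gy's full treatment adds heterogeneity scaling and extrapolation of the variogram to lag zero):

```python
def variogram(series, max_lag):
    """Experimental variogram V(j) = sum((x[i+j] - x[i])^2) / (2 (N - j)),
    estimated for lags j = 1 .. max_lag."""
    n = len(series)
    v = {}
    for j in range(1, max_lag + 1):
        v[j] = sum((series[i + j] - series[i]) ** 2
                   for i in range(n - j)) / (2 * (n - j))
    return v
```

Plotting V(j) against the lag j shows directly how lengthening the sampling interval inflates the uncertainty of the determination, which is the trade-off the abstract describes.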

Relevance:

100.00%

Publisher:

Abstract:

The focus of this thesis is to study both the technical and the economic possibilities of novel on-line condition monitoring techniques in underground low voltage distribution cable networks. The thesis consists of a literature study of fault progression mechanisms in modern low voltage cables, laboratory measurements to determine the basis of and restrictions on novel on-line condition monitoring methods, and an economic evaluation based on fault statistics and information gathered from Finnish distribution system operators. This thesis is closely related to the master's thesis “Channel Estimation and On-line Diagnosis of LV Distribution Cabling”, which focuses more on the actual condition monitoring methods and the signal theory behind them.

Relevance:

100.00%

Publisher:

Abstract:

This thesis discusses the basic problem of modern portfolio theory: how to find the optimal allocation for an investment portfolio. The theory provides a solution in the form of an efficient portfolio, which minimises the risk of the portfolio with respect to the expected return. A central feature of all portfolios on the efficient frontier is that the investor needs to provide the expected return for each asset. Market anomalies are persistent patterns seen in the financial markets that cannot be explained with current asset pricing theory. The goal of this thesis is to study whether these anomalies can be observed among different asset classes. Finally, if persistent patterns are found, it is investigated whether the anomalies hold valuable information for determining the expected returns used in the portfolio optimization. Market anomalies and investment strategies based on them are studied with a rolling estimation window, where the return for the following period is always based on historical information. This is also crucial when rebalancing the portfolio. The anomalies investigated within this thesis are value, momentum, reversal, and idiosyncratic volatility. The research data include price series of country-level stock indices, government bonds, currencies, and commodities. Modern portfolio theory and the views given by the anomalies are combined by utilising the Black-Litterman model. This makes it possible to optimise the portfolio so that the investor's views are taken into account. When constructing the portfolios, the goal is to maximise the Sharpe ratio. The significance of the results is studied by assessing whether the strategy yields excess returns relative to those explained by the three-factor model. The most outstanding finding is that anomaly-based factors include valuable information to enhance efficient portfolio diversification.
When the highest Sharpe ratios for each asset class are picked from the test factors and applied to the Black-Litterman model, the final portfolio results in a superior risk-return combination. The highest Sharpe ratios are provided by the momentum strategy for stocks and by long-term reversal for the rest of the asset classes. Additionally, a strategy based on the value effect was highly appealing, performing essentially as well as the previously mentioned Sharpe strategy. When studying the anomalies, it is found that 12-month momentum is the strongest effect, especially for stock indices. In addition, high idiosyncratic volatility seems to be positively correlated with country stock indices.
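Two quantities recur throughout such an evaluation: the 12-month momentum signal and the Sharpe ratio used to rank strategies. A generic sketch follows; the skip-most-recent-period convention and the zero risk-free default are common assumptions, not necessarily the thesis's exact definitions:

```python
def momentum_12m(prices):
    """12-month momentum: trailing 12-period return, skipping the most
    recent period (a common convention in the anomaly literature)."""
    if len(prices) < 13:
        raise ValueError("need at least 13 price observations")
    return prices[-2] / prices[-13] - 1.0

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of returns
    (population standard deviation, per period, not annualised)."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    return (mean - risk_free) / var ** 0.5
```

In a rolling-window setup, both functions would be re-evaluated at each rebalancing date using only the history available up to that date, mirroring the out-of-sample discipline described above.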