Resumo:
A general reduced-dimensionality finite-field nuclear relaxation method for calculating the vibrational nonlinear optical properties of molecules with large contributions due to anharmonic motions is introduced. In an initial application to the umbrella (inversion) motion of NH3, it is found that the difficulties associated with a conventional single-well treatment are overcome and that the particular definition of the inversion coordinate is not important. Future applications are described.
Resumo:
It is shown that, thanks to an extension of the definition of Topological Molecular Indices, one arrives at the formulation of indices related to the theory of Quantum Molecular Similarity. The connection between the two methodologies is made explicit: it is revealed that a theoretical framework solidly grounded in the theory of Quantum Mechanics can be connected with one of the oldest techniques used in QSPR studies. Results are presented for two example cases in which both methodologies are applied.
Resumo:
This dissertation studies the effects of Information and Communication Technologies (ICT) on the banking sector and the payments system. It provides insight into how technology-induced changes occur, by exploring both the nature and scope of the main technology innovations and evidencing their economic implications for banks and payment systems. Some parts of the dissertation are descriptive. They summarise the main technological developments in the field of finance and link them to economic policies. These parts are complemented with sections of the study that focus on assessing the extent of technology application to banking and payment activities. Finally, it also includes some work which borrows from the economic literature on banking. The need for an interdisciplinary approach arises from the complexity of the topic and the rapid pace of change to which it is subject. The first chapter provides an overview of the influence of developments in ICT on the evolution of financial services and international capital flows. We include the main indicators and discuss innovation in the financial sector, exchange rates and international capital flows. The chapter concludes with impact analysis and policy options regarding the international financial architecture, some monetary policy issues and the role of international institutions. The second chapter is a technology assessment study that focuses on the relationship between technology and money. The application of technology to payments systems is transforming the way we use money and, in some instances, is blurring the definition of what constitutes money. This chapter surveys the developments in electronic forms of payment and their relationship to the banking system. It also analyses the challenges posed by electronic money for regulators and policy makers, and in particular the opportunities created by two simultaneous processes: the Economic and Monetary Union and the increasing use of electronic payment instruments.
The third chapter deals with the implications of developments in ICT on relationship banking. The financial intermediation literature explains relationship banking as a type of financial intermediation characterised by proprietary information and multiple interactions with customers. This form of banking is important for the financing of small and medium-sized enterprises. We discuss the effects of ICT on the banking sector as a whole and then apply these developments to the case of relationship banking. The fourth chapter is an empirical study of the effects of technology on the banking business, using a sample of data from the Spanish banking industry. The design of the study is based on some of the events described in the previous chapters, and also draws from the economic literature on banking. The study shows that developments in information management have differential effects on wholesale and retail banking activities. Finally, the last chapter is a technology assessment study on electronic payments systems in Spain and the European Union. It contains an analysis of existing payment systems and ongoing or planned initiatives in Spain. It forms part of a broader project comprising a series of country-specific analyses covering ten European countries. The main issues raised across the countries serve as the starting point to discuss implications of the development of electronic money for regulation and policies, and in particular, for monetary-policy making.
Resumo:
The main objective of this thesis is the development and systematization of a methodology for addressing management problems in the dynamic operation of Urban Wastewater Systems. The proposed methodology suggests operational strategies that can improve the overall performance of the system under certain problematic situations through a model-based approach. It has three main steps. The first step includes the characterization and modeling of the case study, the definition of scenarios, the evaluation criteria, and the operational settings that can be manipulated to improve the system's performance. In the second step, Monte Carlo simulations are run to evaluate how the system performs over a wide range of combinations of operational settings, and a global sensitivity analysis is conducted to rank the most influential operational settings. Finally, the third step consists of a screening methodology that applies a multi-criteria analysis to select the best combinations of operational settings.
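The Monte Carlo screening and sensitivity-ranking steps described above can be sketched as follows. This is a minimal illustration only: the setting names, their ranges, the toy performance index, and the use of standardized regression coefficients as the sensitivity measure are all assumptions for demonstration, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical operational settings and their allowed ranges (illustrative).
settings = {
    "aeration_setpoint": (0.5, 2.5),   # mg O2 / L
    "recirculation_rate": (0.5, 3.0),  # ratio of influent flow
    "gate_opening": (0.0, 1.0),        # fraction open
}

def performance(x):
    # Toy performance index standing in for the full system model
    # (lower is better, e.g. an aggregated pollution load).
    a, r, g = x
    return 2.0 * a + 0.5 * r + 0.1 * g + rng.normal(0, 0.05)

# Step 2a: Monte Carlo evaluation over the settings space.
n = 2000
lo = np.array([b[0] for b in settings.values()])
hi = np.array([b[1] for b in settings.values()])
X = rng.uniform(lo, hi, size=(n, len(settings)))
y = np.array([performance(x) for x in X])

# Step 2b: global sensitivity via standardized regression coefficients (SRC),
# a common screening measure for ranking influential inputs.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = sorted(zip(settings, np.abs(src)), key=lambda t: -t[1])
for name, s in ranking:
    print(f"{name}: |SRC| = {s:.2f}")

# Step 3: screen out the best-performing combinations for further analysis.
best = X[np.argsort(y)[:10]]
```

In a real study the toy `performance` function would be replaced by a calibrated simulation model of the sewer network and treatment plant, and the SRC ranking by whichever global sensitivity method the methodology prescribes.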
Resumo:
This thesis proposes a methodology for the probabilistic simulation of matrix failure in carbon-fibre-reinforced composite materials, based on the analysis of the random distribution of the fibres. The first chapters review the state of the art in the mathematical modelling of random materials, the computation of effective properties, and transverse failure criteria for composite materials. The first step in the proposed methodology is the determination of the minimum size of a Statistical Representative Volume Element (SRVE). This determination is carried out by analysing the fibre volume fraction, the effective elastic properties, the Hill condition, the statistics of the stress and strain components, the probability density function, and the statistical inter-fibre distance functions of microstructure models of different sizes. Once this minimum size has been determined, a periodic model and a random model are compared in order to assess the magnitude of the differences observed between them. A methodology is also defined for the statistical analysis of the fibre distribution in the composite, based on digital images of the cross-section. This analysis is applied to four different materials. Finally, a two-scale computational method is proposed for simulating the transverse failure of unidirectional plies, which yields probability density functions for the mechanical variables. Some applications and possibilities of this method are described, and the simulation results are compared with experimental values.
Resumo:
In the quantum mechanics literature it is common to find descriptors based on the pair density or on the electron density, with varying degrees of success depending on the applications they address. For a descriptor to make chemical sense it must provide the definition of an atom in a molecule, or be able to identify regions of molecular space associated with some chemical concept (such as a lone pair or a bonding region, among others). Along these lines, several partition schemes have been proposed: the theory of atoms in molecules (AIM), the electron localization function (ELF), Voronoi cells, Hirshfeld atoms, fuzzy atoms, etc. The aim of this thesis is to explore density descriptors based on partitions of molecular space of the AIM, ELF or fuzzy-atom type, to analyse the existing descriptors at different levels of theory, to propose new aromaticity descriptors, and to study the ability of all these tools to discriminate between different reaction mechanisms.
Resumo:
This paper was written within the context of the research project “The development of teachers’ associative organizations and unionism (1889-1990)”, funded by the Fundação para a Ciência e Tecnologia (Foundation for Science and Technology). Five important congresses on secondary education were organized in Portugal between 1927 and 1931. These congresses served to assert the rights of teachers and to consolidate the profession, as well as to promote the discussion of scientific and pedagogical problems. In these congresses the presence of female teachers was marginal. However, the few women teachers who did participate made a significant contribution to the definition of secondary education over the following decades. Among other issues, they contributed to the discussion of female education and to the analysis of the importance of Biology and Physical Education in high schools. This paper presents an analysis of the minutes of the 1927 and 1928 congresses. This analysis allowed an assessment of the important role played by a group of women teachers in defining, at the end of the first third of the 20th century, the future guidelines of Portuguese secondary education. It also shows that these teachers were pioneers who opened the way for the growing number of women teachers in secondary education during the 20th century.
Resumo:
The introduction of my contribution contains brief information on the Faculty of Architecture of the Slovak University of Technology in Bratislava (FA STU) and the architectural research performed at this institution. The schemes and priorities of our research in architecture have changed several times since its beginnings in the early 1950s. The most significant change occurred after “the velvet revolution” of 1989; since 1990 there have been several sources of support for research at universities. A significant part of my contribution is rooted in my own research experience since I joined FA STU in 1975 as a young architect and researcher. The 1980s were characterized by my first, unintentional attempts to do “research by design”, with “scientific” achievements arising as by-products of my design work. Some of them resulted in the following: the conception of mezzo-space, a theory of the complex perception of architectural space, and a definition of the basic principles of ecologically conscious architecture. Nowadays I continue my research by design with my students, applying the so-called solar envelope at the urban scale.
Resumo:
Keywords: quality management; self-evaluation of the organisation; citizen/customer satisfaction; evaluation of impact on society; key performance evaluation; comparison of good practices (benchmarking); continuous improvement.
In professional environments, when the quality assessment of museums is discussed, one immediately thinks of the reputation of the directors and curators, the erudition and specialisation of their knowledge, the diversity of the gathered material and the study of the collections, the methods of collection conservation and environmental control, the regularity and renown of the exhibitions and artists, the building’s architecture and site, the recreation of environments, and the design of the museographic equipment. We accept that the roles and attributes listed above can contribute to defining a specific standard of good museological practice within a hierarchised functional perspective (the museum functions), and to ranking museums on a scale validated among peers, based on “installed” appreciation criteria imposed from the top down according to the “prestige” of the products and of those who conceive them; but these criteria say nothing about the actual satisfaction of citizens/customers or the real impact on society. There is a lack of evaluation instruments that would give us a return on all that the museum is and represents in contemporary society, focused on being and on the relation with the other, rather than on ostentatious possession and on doing merely in order to meet one’s duties. Yet it is only possible to evaluate something by measurement and comparison, on the basis of well-defined criteria and a common grid, involving all of the actors in the self-evaluation, in the definition of the aims to be fulfilled and in the obtaining of results.
Resumo:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation, and hence of the diabatic heating field over the monsoon domain, in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies still exist in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind-shear-based dynamical monsoon index (DMI). Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic to its continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
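The PDF-skewness diagnostic described above can be illustrated with synthetic data. This is a sketch only: the two "principal component" series below are invented mixtures chosen to mimic a near-symmetric strong-monsoon year and a weak-monsoon year whose distribution is skewed toward negative (break-like) values; they are not reanalysis data.

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    # Sample skewness: the third standardized moment.
    x = np.asarray(x)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

# Synthetic daily principal-component series of the dominant
# intraseasonal mode (illustrative stand-ins, not ERA/NCEP data).
pc_strong_monsoon = rng.normal(loc=0.3, scale=1.0, size=5000)

# A weak-monsoon year modeled as a two-regime mixture with a heavier
# tail on the negative (break) side of the PDF.
pc_weak_monsoon = np.concatenate([
    rng.normal(0.5, 0.8, size=3500),    # active-like regime
    rng.normal(-1.8, 0.8, size=1500),   # break-like regime
])

print(f"strong-year skew: {skewness(pc_strong_monsoon):+.2f}")
print(f"weak-year skew:   {skewness(pc_weak_monsoon):+.2f}")
```

The negative skewness of the weak-monsoon series reflects the extra probability mass at break-like PC values, which is the signature the abstract describes.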
Resumo:
Within this paper, modern techniques such as satellite image analysis and the tools provided by geographic information systems (GIS) are exploited in order to extend and improve existing techniques for mapping the spatial distribution of sediment transport processes. The processes of interest comprise mass movements such as solifluction, slope wash, dirty avalanches, and rock and boulder falls. They differ considerably in nature, and therefore different approaches are required for the derivation of their spatial extent. A major challenge is addressing the difference between the comparatively coarse resolution of the available satellite data (Landsat TM/ETM+, 30 m x 30 m) and the actual scale of sediment transport in this environment. A three-step approach has been developed which is based on the concept of Geomorphic Process Units (GPUs): parameterization, process area delineation and combination. Parameters include land cover from satellite data and digital elevation model derivatives. Process areas are identified using a hierarchical classification scheme utilizing thresholds and the definition of topology. The approach has been developed for the Karkevagge in Sweden and could be successfully transferred to the Rabotsbekken catchment at Okstindan, Norway using similar input data. Copyright (C) 2008 John Wiley & Sons, Ltd.
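The hierarchical, threshold-based delineation of process areas can be sketched as below. The raster layers, parameter names, process classes, and threshold values are illustrative assumptions for demonstration, not the rules used in the Karkevagge study.

```python
import numpy as np

# Illustrative per-cell parameter rasters, e.g. a slope grid derived from a
# DEM and a vegetation mask from classified satellite imagery (assumptions).
slope_deg = np.array([[3.0, 12.0, 28.0],
                      [35.0, 18.0, 6.0],
                      [40.0, 22.0, 2.0]])
vegetated = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [0, 0, 1]], dtype=bool)

# Hierarchical rules: each cell is assigned to the first matching
# Geomorphic Process Unit (GPU); the rule order encodes the hierarchy.
gpu = np.full(slope_deg.shape, "none", dtype=object)
rules = [
    ("rockfall",     (slope_deg >= 33) & ~vegetated),
    ("slope_wash",   (slope_deg >= 15) & ~vegetated),
    ("solifluction", (slope_deg >= 5) & (slope_deg < 15) & vegetated),
]
for name, mask in rules:
    # Only cells not yet assigned by a higher-priority rule are classified.
    gpu[(gpu == "none") & mask] = name

print(gpu)
```

A production workflow would operate on full GIS rasters (e.g. via `rasterio` or a GIS toolbox) and add the combination step that merges the delineated process areas into GPUs, but the rule-cascade logic is the same.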
Resumo:
With the rapid development of technology over recent years, construction, in common with many areas of industry, has become increasingly complex. It would therefore seem important to develop and extend the understanding of complexity so that industry in general, and in this case the construction industry, can work with greater accuracy and efficiency to provide clients with a better service. This paper aims to generate a definition of complexity and a method for its measurement in order to assess its influence upon the accuracy of the quantity surveying profession in UK new-build office construction. Quantitative data came from an analysis of twenty projects of varying size and value, and qualitative data came from interviews with professional quantity surveyors. The findings highlight the difficulty of defining and measuring project complexity. The correlation between accuracy and complexity was not straightforward, being subject to many extraneous variables, particularly the impact of project size. Further research is required to develop a better measure of complexity. This is in order to improve the response of quantity surveyors, so that an appropriate level of effort can be applied to individual projects, permitting greater accuracy and enabling better resource planning within the profession.
Resumo:
The development of high throughput techniques ('chip' technology) for measurement of gene expression and gene polymorphisms (genomics), and techniques for measuring global protein expression (proteomics) and metabolite profile (metabolomics) are revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven to be more complex, and their identification more elusive, than previously thought. This is largely due to the fact that much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases offers further complexity, requiring very large studies to disentangle relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for provision of individualised dietary advice and development of food products that optimise disease prevention. 
Application of the new technologies in nutrition research offers considerable potential for development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits offered, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health offers considerable ethical and societal as well as scientific challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.
Resumo:
Active Networks can be seen as an evolution of the classical model of packet-switched networks. The traditional, “passive” network model is based on a static definition of network node behaviour. Active Networks propose an “active” model in which the intermediate nodes (switches and routers) can load and execute user code contained in the data units (packets). Active Networks are a programmable network model, where bandwidth and computation are both considered shared network resources. This approach opens up interesting new research fields. This paper gives a short introduction to Active Networks, discusses the advantages they introduce, and presents the research advances in this field.
Resumo:
The definitions of the base units of the international system of units have been revised many times since the idea of such an international system was first conceived at the time of the French revolution. The objective today is to define all our units in terms of 'invariants of nature', i.e. by referencing our units to the fundamental constants of physics, or the properties of atoms, rather than the characteristics of our planet or of artefacts. This situation is reviewed, particularly in regard to finding a new definition of the kilogram to replace its present definition in terms of a prototype material artefact.