38 results for PHASE CALIBRATION SOURCES
in Helda - Digital Repository of the University of Helsinki
Abstract:
This thesis examines the feasibility of a forest inventory method based on two-phase sampling for estimating forest attributes at the stand or substand level for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data, consisting of remote sensing imagery or other geographic information, with field measurements. Auxiliary data are utilized as first-phase data covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing-aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features into the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone.
The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverses of their mean square errors. Although plot-level estimation accuracy in a two-phase sampling inventory can be improved in many ways, the accuracy of forest estimates based mainly on single-view satellite and aerial imagery remains a relatively poor basis for making stand-level management decisions.
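The inverse-MSE weighting described above can be sketched as follows (a minimal illustration with hypothetical numbers, not data from the thesis):

```python
def combine_estimates(estimates, mses):
    """Combine estimates from several data sources by weighting each
    with the inverse of its mean square error; the weights are
    normalized so that they sum to one."""
    weights = [1.0 / m for m in mses]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical plot-volume estimates (m^3/ha) from three data sources,
# with their mean square errors: the source with the lowest MSE
# receives the largest weight in the combined estimate.
combined = combine_estimates([210.0, 190.0, 205.0], [400.0, 900.0, 625.0])
```

For independent, unbiased estimators this is the classical minimum-variance combination: weighting by 1/MSE minimizes the mean square error of the combined estimate.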
Abstract:
In this paper both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were analyzed first as a potential source of climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 °C colder than today, while the summers were 0.4 °C warmer. Over the same period, springs were 0.9 °C and autumns 0.1 °C colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. Analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early summer (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator.
The information from the fragmentary, partly overlapping, partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time, and hence their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth study, a multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt was made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have featured a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century. Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although cold springs also occurred in this period, such as those of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
Abstract:
The range of consumer health and medicines information sources has diversified along with the increased use of the Internet. This has led to a drive to develop medicines information services and to better incorporate the Internet and e-mail into routine practice in health care and in community pharmacies. To support the development of such services, more information is needed about consumers' use of online information, particularly by those who may be the most likely to use and to benefit from the new sources and modes of medicines communication. This study explored the role and utilization of Internet-based medicines information and information services in the context of the wider network of information sources accessible to the public in Finland. The overall aim was to gather information for developing better and more accessible sources of information for consumers, and services that better meet consumers' needs. Special focus was placed on the needs and information behavior of people with depression who were using antidepressant medicines. This study applied both qualitative and quantitative methods. Consumer medicines information needs and sources were identified by analyzing the utilization of the national drug information call center operated by the University Pharmacy (Study I) and by surveying Finnish adults' (n=2348) use of the different medicines information sources (Study II). The utilization of the Internet as a source of antidepressant information among people with depression was explored through focus group discussions among people with depression and current or past use of antidepressants (n=29, Studies III & IV). Pharmacy response to the needs of consumers in terms of providing e-mail counseling was assessed by conducting a virtual pseudo customer study among Finnish community pharmacies (n=161, Study V). Physicians and pharmacists were the primary sources of medicines information.
People with mental disorders were more frequent users of telephone- and Internet-based medicines information sources and patient information leaflets than people without mental disorders. These sources were used to complement rather than replace information provided face-to-face by health professionals. People with depression used the Internet to seek facts about antidepressants, to share experiences with peers, and out of curiosity. They described access to online drug information as empowering. Some people reported lacking the skills necessary to assess the quality of online information. E-mail medication counseling services provided by community pharmacies were rare and varied in quality. The results suggest that rather than discouraging the use of the Internet, health professionals should direct patients to accurate and reliable sources of online medicines information. Health care providers, including community pharmacies, should also seek to develop new ways of communicating information about medicines with consumers. This study determined that people with depression who use antidepressants need services enabling interactive communication not only with health care professionals but also with peers. Further research should focus on developing medicines information services that facilitate communication among different patient and consumer groups.
Abstract:
Miniaturized analytical devices, such as the heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits like faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used with low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work HN chips were developed further and new applications were introduced. First, a new method for thermal and fluidic characterization of the HN microchips was developed and used to study the chips. The thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil – an extremely complex sample – by microchip atmospheric pressure photoionization (APPI) high resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability for high temperatures, microchip APPI provided efficient vaporization of nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was presented. Ionization was achieved by applying only high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with the HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to connect gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization.
Microchip APPI allows efficient ionization of both polar and nonpolar compounds, whereas the most popular technique, electrospray ionization (ESI), efficiently ionizes only polar and ionic molecules. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC–HN microchip was developed and presented. A single microdevice contained structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI; this is not possible with previously presented chips for LC–MS, since they rely on ESI. The preliminary quantitative performance of the new chip was evaluated, and the chip was also demonstrated with optical detection. A new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into the MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.
Abstract:
Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. Various solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of the drug in the human body. Therefore, knowledge and control of the solid forms are fundamental to maintaining the safety and high quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and thereby change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products. This has led to poor understanding of processes and drug products. Therefore, new powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing processes are greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus provide a molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, which involves simultaneous measurement of drug concentration in the dissolution medium and in-situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during different stages, enabling a molecular-level insight into the occurring phenomena. Near-infrared (NIR) spectroscopy was utilized in the screening of polymorphs and processing-induced transformations (PITs). Polymorph screening was also studied with NIR and Raman spectroscopy in tandem.
Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in-situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In-situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole, and provided a scientific explanation for changes in the dissolution rate. It was concluded that the methods applied and studied provide better process understanding and knowledge of the drug products, and therefore, a way to achieve better quality.
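The quantitative calibration step above relies on partial least squares regression. As a rough illustration of the idea (a generic NIPALS PLS1 sketch on synthetic data, not the actual in-line NIR/Raman calibration model used in the thesis):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: regress a single response y (e.g. solid-form
    content) on spectra X (rows = samples, columns = wavelengths).
    Returns coefficients b and intercept b0 so that y ≈ X @ b + b0."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                  # weight vector: X-y covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                     # scores for this latent variable
        tt = t @ t
        P.append(Xc.T @ t / tt)        # X loadings
        q.append(yc @ t / tt)          # y loading
        W.append(w)
        Xc = Xc - np.outer(t, P[-1])   # deflate X and y before the next component
        yc = yc - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, y_mean - x_mean @ b
```

With as many components as the rank of X, PLS1 reproduces the least-squares fit; in spectroscopic calibration far fewer latent variables are kept, which is what makes the method robust to collinear wavelength channels.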
Abstract:
This work examines the urban modernization of San José, Costa Rica, between 1880 and 1930, using a cultural approach to trace the emergence of the bourgeois city in a small Central American capital, within the context of order and progress. As proposed by Henri Lefebvre, Manuel Castells and Edward Soja, space is given its rightful place as protagonist. The city, the subject of this study, is explored as a seat of social power and as the embodiment of a cultural transformation that took shape in that space, a transformation spearheaded by the dominant social group, the Liberal elite. An analysis of the built environment as a product allows us to understand why the city grew in a determined manner: how the urban space became organized and how its infrastructure and services were distributed. Although the emphasis is on the Liberal heyday from 1880 to 1930, this study also examines the history of the city from its origins in the late colonial period through its consolidation as a capital during the independent era, in order to characterize the nineteenth-century colonial city that prevailed up to the 1890s. A diverse array of primary sources, including official acts, memoirs, newspaper sources, maps and plans, photographs, and travelogues, are used to study the initial phase of San José's urban growth. The investigation places the first period of modern urban growth, at the turn of the nineteenth century, within the prevailing ideological and political context of Positivism and Liberalism. The ideas of the city's elite regarding progress were translated into and reflected in the physical transformation of the city and in the social construction of space. Not only the transformations but also the limits and contradictions of the process of urban change are examined. At the same time, the reorganization of the city's physical space and the beginnings of the ensanche are studied.
Hygiene as an engine of urban renovation is explored by studying the period's new public infrastructure (including pipelines, sewer systems, and the use of asphalt pavement) as part of the Saneamiento of San José. The modernization of public space is analyzed through a study of the first parks, boulevards and monuments, and the emergence of a new urban culture prominently displayed in these green spaces. Parks and boulevards were new public and secular places of power within the modern city, used by the elite to display and educate the urban population in the new civic and secular traditions. The study goes on to explore the idealized image of the modern city through an analysis of European and North American travelogues and photography. The new esthetic of theatrical-spectacular representation of the modern city constructed a visual guide to understanding and coming to know the city. A partial and selective image of generalized urban change presented only the bourgeois facade and excluded everything that challenged the idea of progress. The enduring patterns of spatial and symbolic exclusion built into Costa Rica's capital city at the dawn of the twentieth century shed important light on the long-term political, social and cultural processes that have created the troubled urban landscapes of contemporary Latin America.
Abstract:
The study is an examination of how the distant national past has been conceived and constructed for Finland from the mid-sixteenth century to the Second World War. The author argues that the perception of and need for a national 'Golden Age' have undergone several phases during this period, yet the perceived Greatness of the Ancient Finns has been of great importance for the growth and development of the fundamental concepts of Finnish nationalism. The question reaches deeper than simply the Kalevala or the Karelianism of the 1890s. Despite early occurrences of most of the topics the image-makers could utilize for the construction of an Ancient Greatness, a truly national proto-history only became a necessity after 1809, when a new conceptual 'Finnishness' was both conceived and brought forth in reality. In this process of nation-building, ethnic myths of origin and descent provided the core of the nationalist cause - the defence of a primordial national character - and within a few decades the antiquarian issue became a standard element of nationalist public enlightenment. The emerging, archaeologically substantiated nationhood was more than a scholarly construction: it was a 'politically correct' form of ethnic self-imaging, continuously adapting its message to contemporary society and modern progress. Prehistoric and medieval Finnishness became even more relevant for the intellectual defence of the nation during the period of Russian administrative pressure in 1890-1905. With independence, the origins of Finnishness were militarized even further, although the 'hot' phase of antiquarian nationalism ended, as many considered the Finnish state reestablished after centuries of 'dependency'. Nevertheless, the distant past of tribal Finnishness and the conceived Golden Age of the Kalevala remained obligating.
The decline of public archaeology is quite evident after 1918, even though the national message of the antiquarian pursuits remained present in the history culture of the public. The myths, symbols, images, and constructs of ancient Finnishness had already become embedded in society by the turn of the century, like the patalakki cap, which remains a symbol of Finnishness to this day. The approach combines a broad spectrum of previously neglected primary sources, all related to history culture and the subtle banalization of the distant past: school books, postcards, illustrations, festive costumes, drama, satirical magazines, novels, jewellery, and calendars. Tracing the national myths back to their original contexts enables a rather thorough deconstruction of the proto-historical imaginary in this Finnish case study. As for Anthony D. Smith's idea of ancient 'ethnies' as the basis for nationalist causes, the author considers such an approach totally misplaced in the Finnish case.
Abstract:
The study discusses the position of France as the United States' ally in NATO in 1956-1958. The concrete position of France, and the role it was envisioned to have, are treated from the points of view of three participants in the Cold War: France, the United States and the Soviet Union. How did these different parties perceive the question, and did these views change when the French Fourth Republic turned into the Fifth in 1958? The study is based on published French and American foreign affairs documents. Because of problems with access to Soviet archival sources, the study uses reports on France-NATO relations in the newspaper Pravda, the official organ of the Communist Party of the USSR, for information about how the Soviet side saw the question. Due to the nature and use of the source material, and the chronological structure of the work, the study belongs methodologically to the research field of the History of International Relations. As distinct from the more theoretically inclined field of political science, the study is characteristically historical research: a work based on the qualitative method and original sources that aims at creating a coherent narrative of the views expressed during the period covered. France's road to full membership of NATO is treated on the basis of the research literature, after which discussions about France's position in the Western Alliance are chronologically traced for the last years of the Fourth Republic and the months immediately following Charles de Gaulle's return to power. From the spring of 1956 onwards, France can be seen striving, on the one hand, to maintain her freedom of action inside the Western Alliance and, on the other, to widen the dialogue between the allies. The decision on France's own nuclear deterrent was already made during the Fourth Republic, when it was expected to become part of NATO's common defence. This was to change with de Gaulle.
The USA felt that France still fancied herself a great power and that she could not participate fully in NATO's common defence because of her colonies. The Soviet Union saw the concrete position of France in the Alliance as one of complete dependence on the USA, but France's desired role was expressed largely in "Gaullist" terms. The expressions used by the General and by Soviet propaganda were close to each other, but the Soviet Union could not support de Gaulle without endangering the position of the French Communist Party. Between the Fourth and Fifth Republics no great rupture in content took place concerning the views of France's role and position in the Western Alliance. The questions posed by de Gaulle had been expressed throughout the Fourth Republic's existence. With the General, however, the weight and rhetoric of these questions changed greatly. Already at an early stage the Americans saw it as possible that, with de Gaulle, France would try to change her role. The rupture took place in the form of expression rather than in its content.
Abstract:
In the 1990s the companies utilizing and producing new information technology, especially so-called new media, were also expected to be forerunners in new forms of work and organization. Researchers anticipated that new, more creative forms of work and the changing content of working life were about to replace old industrial and standardized ways of working. However, research on actual companies in the IT sector revealed a situation where only minor changes to existing organizational forms were seen. Many of the independent companies faced great difficulties trying to survive the rapid changes in products and production forms in the emerging field. Most of the research on the new media field has been conducted as surveys, and an understanding of the actual everyday work process has remained thin. My research is a longitudinal study of the early phases of one new media company in Finland. The study is an analysis of the challenges the company faced in a rapidly changing business field and of its attempts to overcome these challenges. The two main analyses in the study focus on the developmental phases of the company and the disturbances in the production process. Based on these analyses, I study changes and learning at work using the methodological framework of developmental work research. Developmental work research is a Finnish variant of cultural-historical activity theory applied to the study of learning and transformations at work. The data was gathered over a three-year period of ethnographic fieldwork. I documented the production processes and everyday life in the company as a participant observer. I interviewed key persons, video- and audio-taped meetings, followed e-mail correspondence and collected various documents, such as agreements and memos. I developed a systematic method for analyzing the disturbances in the production process by combining the various data sources.
The systematic analysis of the disturbances depicted a very complex and only partly managed production process. The production process had a long duration, and no single actor had an understanding of it as a whole. Most of the disturbances had to do with customer relationships. The disturbances were latent in nature; they were recognized but not addressed. In the particular production processes that I analyzed, the ending life span of a particular product, the CD-ROM, became obvious. This finding can be interpreted in relation to the developmental phase of the production and the transformation of the field as a whole. Based on the analysis of the developmental phases and the disturbances, I formulate a hypothesis of the contradictions and developmental potentials of the activity studied. The conclusions of the study challenge the existing understanding of how to conceptualize and study organizational learning in production work. Most theories of organizational learning address neither qualitative changes in production nor the historical challenges of organizational learning itself. My study opens up a new horizon in understanding organizational learning in a rapidly changing field where a learning culture based on craft or mass-production work is insufficient. There is a need for anticipatory and proactive organizational learning: proactive learning is needed to anticipate changes in production type and in the life cycles of products.
Abstract:
This study evaluates how the advection of precipitation, or wind drift, between the radar volume and the ground affects radar measurements of precipitation. Normally precipitation is assumed to fall vertically to the ground from the contributing volume, so that the radar measurement represents the geographical location immediately below. In this study radar measurements are corrected using hydrometeor trajectories calculated from measured and forecasted winds, and the effect of trajectory correction on the radar measurements is evaluated. Wind drift statistics for Finland are compiled using sounding data from two weather stations spanning two years. For each sounding, the hydrometeor phase at ground level is estimated and the drift distance calculated for different originating-level heights. In this way the drift statistics are constructed as a function of range from the radar and elevation angle. On average, a wind drift of 1 km was exceeded at approximately 60 km distance, while a drift of 10 km was exceeded at 100 km distance. Trajectories were calculated using model winds in order to produce a trajectory-corrected ground field from radar PPI images. It was found that on the upwind side of the radar the effective measuring area was reduced, as some trajectories exited the radar volume scan. On the downwind side, areas near the edge of the radar measuring area experienced improved precipitation detection. The effect of trajectory correction is most prominent in instantaneous measurements and diminishes when accumulating over longer time periods. Furthermore, measurements of intense and small-scale precipitation patterns benefit most from wind drift correction. The contribution of wind drift to the uncertainty of the estimated Ze(S) relationship was studied by simulating the effect of different error sources on the uncertainty in the relationship coefficients a and b.
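The drift-distance calculation from layered winds can be sketched as follows (a simplified single-wind-direction illustration with hypothetical numbers; the thesis itself uses full hydrometeor trajectories from measured and forecasted winds):

```python
def drift_distance(layers, fall_speed):
    """Horizontal drift of a hydrometeor falling through stacked,
    horizontally uniform wind layers. `layers` is a sequence of
    (depth_m, wind_speed_ms) pairs from the originating level down to
    the ground; `fall_speed` is the terminal fall speed in m/s.
    Each layer contributes wind_speed * (time spent falling through it)."""
    return sum(wind * (depth / fall_speed) for depth, wind in layers)

# Snow falling at ~1 m/s from 3 km altitude through three 1-km layers:
drift_m = drift_distance([(1000.0, 15.0), (1000.0, 10.0), (1000.0, 5.0)], 1.0)
# (15 + 10 + 5) m/s * 1000 s of fall time per layer = 30 km of total drift
```

Slow-falling snow is thus far more sensitive to wind drift than rain, which is why the drift statistics depend strongly on the hydrometeor phase at ground level.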
The overall uncertainty was assumed to consist of systematic errors of both the radar and the gauge, as well as errors caused by turbulence at the gauge orifice and by wind drift of precipitation. The focus of the analysis is the error associated with wind drift, which was determined by describing the spatial structure of the reflectivity field using the spatial autocovariance (or variogram). This spatial structure was then used with the calculated drift distances to estimate the variance in the radar measurement produced by precipitation drift, relative to the other error sources. It was found that the error caused by wind drift was of similar magnitude to the error caused by turbulence at the gauge orifice at all ranges from the radar, with the systematic errors of the instruments being a minor issue. The correction method presented in the study could be used in radar nowcasting products to improve the estimation of visibility and local precipitation intensities. The method, however, only considers pure snow, and for operational purposes some improvements are desirable, such as melting layer detection, VPR correction and taking the solid hydrometeor type into account, which would improve the estimation of the vertical velocities of the hydrometeors.
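The variogram step can be illustrated with a minimal empirical semivariogram for a regularly spaced 1-D reflectivity transect (a generic sketch, not the thesis implementation): the variance introduced by displacing a point measurement by lag h is then 2*gamma(h).

```python
import numpy as np

def empirical_semivariogram(z, max_lag):
    """Empirical semivariogram gamma(h) = 0.5 * mean((z[i+h] - z[i])^2)
    of a regularly spaced 1-D field z, for lags h = 1..max_lag.
    The variance of the difference between two points separated by
    lag h is 2 * gamma(h), which is how a known drift distance
    translates into measurement variance."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```

A smooth, large-scale field gives a flat semivariogram (little error from drift), while intense small-scale patterns give a steep one, consistent with the finding that such patterns benefit most from drift correction.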
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research) in which a dedicated heavy-ion detector exploits the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As a part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the history of Finnish science. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to chart possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips was 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems arising during the learning curve of the project had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in the tests as too many individual channel failures. Bonding failures, in contrast, rarely caused the rejection of any component. One sensor type among the three different sensor manufacturers proved to be of lower quality than the others.
The sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during the module production demonstrates that the assembly process was highly successful.
Abstract:
There is intense activity in the area of the theoretical chemistry of gold. It is now possible to predict new molecular species and, more recently, solids by combining relativistic methodology with isoelectronic thinking. In this thesis we predict a series of solid sheet-type crystals for Group-11 cyanides, MCN (M=Cu, Ag, Au), and Group-2 and Group-12 carbides, MC2 (M=Be-Ba, Zn-Hg). The idea of sheets is then extended to nanostrips, which can be bent into nanorings. The bending energies and deformation frequencies can be systematized by treating these molecules as elastic bodies. In these species Au atoms act as an 'intermolecular glue'. Further suggested molecular species are the new uncongested aurocarbons and the neutral Au_nHg_m clusters. Many of the suggested species are expected to be stabilized by aurophilic interactions. We also estimate the MP2 basis-set limit of the aurophilicity for the model compounds [ClAuPH_3]_2 and [P(AuPH_3)_4]^+. Besides investigating the size of the basis set applied, our research confirms that the 19-VE TZVP+2f level, used a decade ago, already produced 74% of the present aurophilic attraction energy for the [ClAuPH_3]_2 dimer. Likewise we verify the preferred C4v structure for the [P(AuPH_3)_4]^+ cation at the MP2 level. We also perform the first calculations on model aurophilic systems using the SCS-MP2 method and compare the results to high-accuracy CCSD(T) ones. The recently obtained high-resolution microwave spectra of MCN molecules (M=Cu, Ag, Au) provide an excellent testing ground for quantum chemistry. MP2 or CCSD(T) calculations, correlating all 19 valence electrons of Au and including BSSE and SO corrections, are able to give bond lengths to 0.6 pm or better. Our calculated vibrational frequencies are expected to be better than the currently available experimental estimates. Qualitative evidence for multiple Au-C bonding in triatomic AuCN is also found.