19 results for computer tomography
in Helda - Digital Repository of University of Helsinki
Abstract:
Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory, yet in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve the products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows the storage of large representations for long periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. Another study demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid, visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized, because access to either is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel design principles are presented, based on the idea of supporting the deployment of LTWM in the main task.
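As a loose computational analogy (ours, not the author's model), the retrieval structure described above can be pictured as a hierarchy of cue-addressable slots in long-term memory supporting the encode, update, and retrieve operations; a minimal Python sketch:

    # Toy analogy only: a retrieval structure as a hierarchy of
    # cue-addressable slots built into long-term memory.
    class RetrievalStructure:
        def __init__(self):
            # Each cue maps to a stored product of processing,
            # or to a nested sub-structure (hence "hierarchical").
            self.slots = {}

        def encode(self, cue, item):
            self.slots[cue] = item      # rapid, cue-based storage

        def update(self, cue, item):
            self.slots[cue] = item      # skilled updating of a known slot

        def retrieve(self, cue):
            # Access at STWM-like speed, backed by durable LTM storage.
            return self.slots.get(cue)

    # Hypothetical usage: parking a subtask state under a cue before
    # an interruption, then restoring it afterwards.
    rs = RetrievalStructure()
    rs.encode("message-draft", {"recipient": "J.K.", "text": "On my way"})
    print(rs.retrieve("message-draft"))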
Abstract:
The present study examined how personality and social psychological factors affect third and fourth graders' computer-mediated communication. Personality was analysed in terms of the following strategies: optimism, pessimism, and defensive pessimism. Students worked either individually or in dyads that were paired homogeneously or heterogeneously according to these strategies. Moreover, the present study compared horizontal (peer-only) and vertical (teacher-involved) interaction. The study also examined the role that popularity plays, and students were divided into groups based on their popularity level. The results show that an optimistic strategy is useful. Optimism was found to be related to the active production and processing of ideas. Although previous research has identified drawbacks to pessimism in achievement settings, this study shows that the pessimistic strategy is not as debilitating as is usually assumed. Pessimistic students were able to process their ideas. However, defensive pessimists were somewhat cautious in introducing or changing ideas. Heterogeneous dyads were not beneficial configurations with respect to producing, introducing, or changing ideas. Moreover, many differences were found between horizontal and vertical interaction; specifically, the students expressed more opinions and feelings when teachers took no part in the discussions. Strong emotions were observed especially in the horizontal interaction. Further, group working skills were found to be more important for boys than for girls, while rejected students were not at a disadvantage compared with popular ones. Schools can encourage emotional and social learning. The present study shows that students can use computers to express their feelings. In addition, students who are unpopular in non-computer contexts, or students who use pessimism, can benefit from computers. Participation in computer discussions can give unpopular children a chance to develop confidence when relating to peers.
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases all that is required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of the oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D, and interactive imaging applications. This was achieved first with conventional computed tomography (CT) and more recently with cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years, and its availability and use are increasing continuously. However, at present only best-practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant to dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
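For reference, the effective dose compared in such phantom studies is the standard ICRP tissue-weighted sum of equivalent doses (a textbook definition, not a formula quoted from the thesis):

    E = \sum_{T} w_T \, H_T , \qquad \sum_{T} w_T = 1

where H_T is the equivalent dose in tissue or organ T and w_T is its tissue weighting factor.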
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has enabled the use of CT perfusion (CTp) and CT angiography (CTA) in evaluating acute ischemic stroke. This study focuses on new multidetector CT techniques, namely whole-brain and first-pass CT perfusion plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11; R² was 0.73; and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset. Monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). We also investigated the effect of IV-rtPA on the affected brain by measuring the volume of salvaged tissue in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls. Carotid CTA was compared with carotid digital subtraction angiography (DSA) in grading stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
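To make the quantities concrete: the normalization of pCBV described above, and one plausible functional form for the probability-of-infarction curve (the logistic shape is our assumption; the abstract reports only the fitted probabilities and R² = 0.73), can be written as

    \mathrm{pCBV}_{\mathrm{norm}} = 100\% \times \frac{\mathrm{pCBV}_{\mathrm{tissue}}}{\mathrm{pCBV}_{\mathrm{contralateral}}}

    P(\mathrm{infarction} \mid \mathrm{pCBV}_{\mathrm{norm}}) = \frac{1}{1 + e^{\,\beta_0 + \beta_1\,\mathrm{pCBV}_{\mathrm{norm}}}} , \qquad \beta_1 > 0

so that the probability falls from about 0.99 in the poorly perfused core towards about 0.11 outside the infarct boundary as normalized perfusion increases.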
Abstract:
Fusion power is an appealing source of clean and abundant energy. The radiation resistance of reactor materials is one of the greatest obstacles on the path towards commercial fusion power. These materials are subject to a harsh radiation environment, and must neither fail mechanically nor contaminate the fusion plasma. Moreover, for a power plant to be economically viable, the reactor materials must withstand long operation times with little maintenance. Fusion reactor materials will contain hydrogen and helium, due to deposition from the plasma and to nuclear reactions caused by energetic neutron irradiation. The first-wall and divertor materials, carbon and tungsten in existing and planned test reactors, will be subject to intense bombardment by low-energy deuterium and helium, which erodes and modifies the surface. All reactor materials, including the structural steel, will suffer irradiation by high-energy neutrons, causing displacement cascade damage. Molecular dynamics simulation is a valuable tool for studying irradiation phenomena, such as surface bombardment and the onset of primary damage due to displacement cascades. The governing mechanisms operate on the atomic level, and hence are not easily studied experimentally. In order to model materials, interatomic potentials are needed to describe the interactions between the atoms. In this thesis, new interatomic potentials were developed for the tungsten-carbon-hydrogen system and for iron-helium and chromium-helium. This made possible the study of previously inaccessible systems, in particular the effect of H and He on radiation damage. The potentials were based on experimental and ab initio data from the literature, as well as on density-functional theory calculations performed in this work. As a model for ferritic steel, iron-chromium with 10% Cr was studied. The difference between Fe and FeCr was shown to be negligible for threshold displacement energies. The properties of small He and He-vacancy clusters in Fe and FeCr were also investigated. The clusters were found to be more mobile and to dissociate more rapidly than previously assumed, and the effect of Cr was small. The primary damage formed by displacement cascades was found to be heavily influenced by the presence of He, both in FeCr and in W. Many important issues with fusion reactor materials remain poorly understood and will require a huge effort by the international community. The potential models developed and the simulations performed in this thesis reveal many interesting features, but also serve as a platform for further studies.
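To illustrate the method (not the thesis's own potentials): a molecular dynamics code needs only an interatomic potential to supply forces, which are then integrated with a scheme such as velocity Verlet. The sketch below uses a generic Lennard-Jones pair potential in reduced units, whereas the thesis develops far more elaborate bond-order potentials for W-C-H, Fe-He, and Cr-He:

    import numpy as np

    # Illustrative Lennard-Jones parameters in reduced units; the
    # thesis's bond-order potentials are much more complex.
    EPS, SIGMA, DT = 1.0, 1.0, 1e-3

    def lj_forces(pos):
        """Pairwise Lennard-Jones forces on each atom."""
        forces = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r_vec = pos[i] - pos[j]
                r2 = np.dot(r_vec, r_vec)
                inv6 = (SIGMA**2 / r2) ** 3          # (sigma/r)^6
                f = 24 * EPS * (2 * inv6**2 - inv6) / r2 * r_vec
                forces[i] += f
                forces[j] -= f
        return forces

    def velocity_verlet(pos, vel, mass, steps):
        """Integrate Newton's equations; the potential defines the dynamics."""
        f = lj_forces(pos)
        for _ in range(steps):
            vel += 0.5 * DT * f / mass
            pos += DT * vel
            f = lj_forces(pos)
            vel += 0.5 * DT * f / mass
        return pos, vel

    # Hypothetical usage: two atoms released near the potential minimum.
    pos = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, mass=1.0, steps=1000)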
Abstract:
The aim of this thesis was to study the seismic tomography structure of the Earth's crust together with earthquake distribution and mechanisms beneath the central Fennoscandian Shield, mainly in southern and central Finland. The earthquake foci and some fault plane solutions are correlated with 3-D images of the velocity tomography. The results are discussed in relation to the stress field of the Shield and to other geophysical (e.g. geomagnetic, gravimetric, tectonic, and anisotropy) studies of the Shield. The earthquake data for the Fennoscandian Shield have been extracted from the Nordic earthquake parameter database, which was founded at the time of inception of the earthquake catalogue for northern Europe. Eight earlier earthquake source mechanisms are included in a pilot study on creating a novel technique for calculating an earthquake fault plane solution. Altogether, eleven source mechanisms of shallow, weak earthquakes are related to the 3-D tomography model to trace stresses in the crust of southern and central Finland. The earthquakes in the eastern part of the Fennoscandian Shield represent low-activity, intraplate seismicity. Earthquake mechanisms with NW-SE oriented horizontal compression confirm that the dominant stress field originates from the ridge-push force in the North Atlantic Ocean. Earthquakes accumulate in coastal areas, at intersections of tectonic lineaments, and in main fault zones, or are bordered by fault lines. The majority of Fennoscandian earthquakes concentrate in the south-western Shield in southern Norway and Sweden. From there, epicentres spread along the ridge of the Shield up the west coast of the Gulf of Bothnia, northwards along the Tornio River - Finnmark fault system to the Barents Sea, and branch out north-eastwards via the Kuusamo region to the White Sea - Kola Peninsula faults. The local seismic tomographic method was applied to find the terrane distribution within the central part of the Shield, the Svecofennian Orogen. From 300 local explosions, a total of 19,765 crustal Pg- and Sg-wave arrival times were inverted to create independent 3-D Vp and Vs tomographic models, from which the Vp/Vs ratio was calculated. The 3-D structure of the crust is presented as a P-wave and, for the first time, as an S-wave velocity model, and also as a Vp/Vs-ratio model of the SVEKALAPKO area, which covers 700 x 800 km² in southern and central Finland. Some P-wave Moho-reflection data were also interpolated to image the relief of the crust-mantle boundary (i.e. the Moho). In the tomography model, the seismic velocities vary smoothly. The lateral variations are larger for Vp (dVp = 0.7 km/s) than for Vs (dVs = 0.4 km/s). The Vp/Vs ratio varies spatially more distinctly than the P- and S-wave velocities, usually from 1.70 to 1.74 in the upper crust and from 1.72 to 1.78 in the lower crust. Schist belts and their continuations at depth are associated with lower velocities and lower Vp/Vs ratios than the granitoid areas. The tomography modelling suggests that the Svecofennian Orogen was accreted from crustal blocks ranging in size from 100 x 100 km² to 200 x 200 km² in cross-sectional area. The intervening sedimentary belts have ca. 0.2 km/s lower P- and S-wave velocities and ca. 0.04 lower Vp/Vs ratios. Thus, the tomographic model supports the concept that the thick Svecofennian crust was accreted from several crustal terranes, some hidden, and that the crust was later modified by intra- and underplating.
In conclusion, as a novel approach, the earthquake focal mechanisms and focal depth distribution are discussed in relation to the 3-D tomography model. The schist belts and the transformation zones between the high- and low-velocity anomaly blocks are characterized by deeper earthquakes than the granitoid areas, where shallow events dominate. Although only a few focal mechanisms were solved for southern Finland, there is a trend towards strike-slip and oblique strike-slip movements inside the schist areas. Normal dip-slip earthquakes are typical of the seismically active Kuusamo district at the NE edge of the SVEKALAPKO area, where the Archean crust is ca. 15-20 km thinner than the Proterozoic Svecofennian crust. Two earthquakes with near-vertical dip-slip mechanisms occurred at the NE-SW junction between the Central Finland Granitoid Complex and the Vyborg rapakivi batholith, where a deep-set intrusion with a high Vp/Vs ratio splits the southern Finland schist belt into two parts in the tomography model.
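In outline, the travel-time inversion behind such a model follows the standard scheme of local seismic tomography (our notation, not the thesis's): each observed arrival time is an integral of slowness along the ray path,

    t_i = \int_{\mathrm{ray}_i} u(\mathbf{x})\, ds , \qquad u = 1/v ,

which is linearized around a starting model into \delta t_i = \sum_j G_{ij}\, \delta u_j and solved for the slowness perturbations \delta u_j of the grid cells by least squares, independently for the Pg and Sg data; the ratio model is then simply \kappa(\mathbf{x}) = V_P(\mathbf{x}) / V_S(\mathbf{x}).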
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years, national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles were followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article, two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
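As a minimal sketch of how the information categories above might map onto a concrete layer name (the field order, widths, and example codes are illustrative assumptions, not the normative encoding of ISO 13567):

    from dataclasses import dataclass

    # Sketch only: compose a layer name from the mandatory ISO 13567
    # information categories named in the article. Field widths and
    # codes below are hypothetical, not taken from the standard.
    @dataclass
    class LayerName:
        agent: str         # party responsible for the information
        element: str       # building element classification code
        presentation: str  # element graphics vs. drawing annotation

        def compose(self) -> str:
            # Concatenate fixed-width fields into one layer name.
            return f"{self.agent:<2}{self.element:<6}{self.presentation:<2}"

    # Hypothetical example: an architect's wall layer carrying
    # element graphics rather than annotation.
    print(LayerName(agent="A-", element="271---", presentation="E-").compose())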
Abstract:
In smaller countries, where the key players in construction IT development tend to know each other personally and where public R&D funding is concentrated in a few channels, IT roadmaps and strategies would seem to have a better chance of influencing development than in the bigger industrial countries. In this paper, Finland and the RATAS project are presented as a historical case illustrating such impact. RATAS was initiated as a construction IT roadmap project in 1985, involving many of the key organisations and companies active in construction sector development. Several of the individuals who took an active part in the project have played an important role in later developments, both in Finland and on the international scene. The central result of RATAS was the identification of what is nowadays called Building Information Modelling (BIM) technology as the key issue in getting IT into efficient use in the construction sector. BIM, earlier referred to as building product modelling, has been a key ingredient in many roadmaps since and the subject of international standardisation efforts such as STEP and the IAI/IFCs. In hindsight, the RATAS project can be seen as a forerunner whose impact also transcended national borders.