6 results for Electrical impedance tomography, Calderon problem, factorization method
in Helda - Digital Repository of the University of Helsinki
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted, and in many places there are concrete discrepancies between the two, resembling the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of models of human problem solving and is also included in normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that decomposition can be implicit or explicit, but have not provided a theoretical basis for this distinction. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search in conceptual design. The result is a synthetic view that describes recognition and decomposition as the basic schemata of conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition.
The results showed that, despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, judging from their use of structured strategies, the designers relied throughout on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition; the current results could then be reinterpreted.
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. 
In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
Abstract:
The problem of recovering information from measurement data has been studied for a long time. Early methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert began developing mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that, under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. Measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are taken to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, supported by numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
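The Hurst-parameter retrieval described above can be illustrated with a small numerical sketch. The code below is not the thesis's statistical inversion method: it simulates fractional Brownian motion by Cholesky factorization of the fractional Gaussian noise covariance and recovers H with a simple variance-scaling regression. All function names and numbers are illustrative assumptions.

```python
import numpy as np

def fgn_cov(n, h):
    # Autocovariance matrix of fractional Gaussian noise with Hurst parameter h:
    # gamma(k) = 0.5 * (|k+1|^{2h} + |k-1|^{2h} - 2|k|^{2h})
    k = np.arange(n)
    g = 0.5 * (np.abs(k + 1.0)**(2*h) + np.abs(k - 1.0)**(2*h) - 2.0*np.abs(k)**(2*h))
    i = np.arange(n)
    return g[np.abs(i[:, None] - i[None, :])]

def simulate_fbm(n, h, rng):
    # Exact simulation: Cholesky factor of the fGn covariance colors white noise,
    # and the cumulative sum of the fGn sample gives a fractional Brownian path
    L = np.linalg.cholesky(fgn_cov(n, h))
    return np.cumsum(L @ rng.standard_normal(n))

def estimate_hurst(x, max_lag=20):
    # Variance-scaling regression: E|B(t+k) - B(t)|^2 ~ k^{2H},
    # so the log-log slope of mean squared increments is 2H
    lags = np.arange(1, max_lag + 1)
    v = [np.mean((x[k:] - x[:-k])**2) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```

With a seeded generator, `estimate_hurst(simulate_fbm(1000, 0.7, rng))` typically lands close to the true H = 0.7; the thesis's statistical inversion instead recovers both the amplitude and H from noisy data.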
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distributing a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from several perspectives: optimality of the protocol, selfishness of end users, and fairness of the protocol towards end users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols from a set of available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new rules for the optimality we should take into account, and new properties of networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which optimization is applied to a general-form backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme to achieve fairness for participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities.
For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlay and peer-to-peer networks, where the resource is not located at a common station but is spread across the network. The theoretical analysis predicts which behavior an end station will select under such a reputation mechanism.
Abstract:
The aim of this thesis was to study the seismic tomographic structure of the Earth's crust together with earthquake distribution and mechanisms beneath the central Fennoscandian Shield, mainly in southern and central Finland. The earthquake foci and some fault plane solutions are correlated with 3-D images of the velocity tomography. The results are discussed in relation to the stress field of the Shield and to other geophysical studies of the Shield, e.g. geomagnetic, gravimetric, tectonic, and anisotropy studies. The earthquake data for the Fennoscandian Shield have been extracted from the Nordic earthquake parameter database, which was founded at the inception of the earthquake catalogue for northern Europe. Eight earlier earthquake source mechanisms are included in a pilot study on creating a novel technique for calculating an earthquake fault plane solution. Altogether, eleven source mechanisms of shallow, weak earthquakes are related in the 3-D tomography model to trace stresses of the crust in southern and central Finland. The earthquakes in the eastern part of the Fennoscandian Shield represent low-activity, intraplate seismicity. Earthquake mechanisms with NW-SE oriented horizontal compression confirm that the dominant stress field originates from the ridge-push force in the North Atlantic Ocean. Earthquakes accumulate in coastal areas, at intersections of tectonic lineaments, and in main fault zones, or are bordered by fault lines. The majority of Fennoscandian earthquakes concentrate in the south-western Shield in southern Norway and Sweden. From there, epicentres spread via the ridge of the Shield along the west coast of the Gulf of Bothnia, northwards along the Tornio River - Finnmark fault system to the Barents Sea, and branch out north-eastwards via the Kuusamo region to the White Sea - Kola Peninsula faults. The local seismic tomographic method was applied to find the terrane distribution within the central parts of the Shield, the Svecofennian Orogen.
From 300 local explosions, a total of 19,765 crustal Pg- and Sg-wave arrival times were inverted to create independent 3-D Vp and Vs tomographic models, from which the Vp/Vs ratio was calculated. The 3-D structure of the crust is presented as a P-wave and, for the first time, as an S-wave velocity model, and also as a Vp/Vs-ratio model of the SVEKALAPKO area, which covers 700 x 800 km2 in southern and central Finland. In addition, some P-wave Moho-reflection data were interpolated to image the relief of the crust-mantle boundary (i.e. the Moho). In the tomography model, the seismic velocities vary smoothly. The lateral variations are larger for Vp (dVp = 0.7 km/s) than for Vs (dVs = 0.4 km/s). The Vp/Vs ratio varies spatially more distinctly than the P- and S-wave velocities, usually from 1.70 to 1.74 in the upper crust and from 1.72 to 1.78 in the lower crust. Schist belts and their continuations at depth are associated with lower velocities and lower Vp/Vs ratios than the granitoid areas. The tomographic modelling suggests that the Svecofennian Orogen was accreted from crustal blocks ranging in cross-sectional area from 100 x 100 km2 to 200 x 200 km2. The intervening sedimentary belts have ca. 0.2 km/s lower P- and S-wave velocities and ca. 0.04 lower Vp/Vs ratios. Thus, the tomographic model supports the concept that the thick Svecofennian crust was accreted from several crustal terranes, some hidden, and that the crust was later modified by intra- and underplating. In conclusion, as a novel approach, the earthquake focal mechanisms and focal depth distribution are discussed in relation to the 3-D tomography model. The schist belts and the transformation zones between the high- and low-velocity anomaly blocks are characterized by deeper earthquakes than the granitoid areas, where shallow events dominate. Although only a few focal mechanisms were solved for southern Finland, there is a trend towards strike-slip and oblique strike-slip movements inside the schist areas.
Normal dip-slip earthquakes are typical of the seismically active Kuusamo district at the NE edge of the SVEKALAPKO area, where the Archean crust is ca. 15-20 km thinner than the Proterozoic Svecofennian crust. Two near-vertical dip-slip earthquakes occurred at the NE-SW junction between the Central Finland Granitoid Complex and the Vyborg rapakivi batholith, where a deep-set intrusion with a high Vp/Vs ratio splits the southern Finland schist belt into two parts in the tomography model.
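As a small worked illustration of the quantities in this abstract, the Vp/Vs ratio follows directly from Pg and Sg travel times along a common path. The epicentral distance and travel times below are hypothetical, chosen only so that the resulting velocities and ratio are consistent with typical upper-crustal values in the range the abstract reports (1.70-1.74).

```python
def vp_vs_ratio(dist_km, t_p, t_s):
    # Apparent velocities from crustal Pg and Sg travel times over the same path;
    # note the distance cancels in the ratio: Vp/Vs = t_s / t_p
    vp = dist_km / t_p
    vs = dist_km / t_s
    return vp, vs, vp / vs

# Hypothetical example: 120 km path, Pg in 19.0 s, Sg in 33.0 s
vp, vs, ratio = vp_vs_ratio(120.0, t_p=19.0, t_s=33.0)
# vp ~ 6.3 km/s, vs ~ 3.6 km/s, ratio ~ 1.74
```

Because the distance cancels, the ratio is insensitive to location error along the path, which is one reason Vp/Vs maps can resolve lithology (schist belts vs. granitoids) more distinctly than the individual velocity models.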
Abstract:
Neurons can be divided into various classes according to their location, morphology, neurochemical identity and electrical properties. They form complex interconnected networks with precise roles for each cell type. GABAergic neurons expressing the calcium-binding protein parvalbumin (Pv) are mainly interneurons, which serve a coordinating function. Pv-cells modulate the activity of principal cells with high temporal precision. Abnormalities of Pv-interneuron activity in cortical areas have been linked to neuropsychiatric illnesses such as schizophrenia. Cerebellar Purkinje cells are known to be central to motor learning. They are the sole output from the layered cerebellar cortex to deep cerebellar nuclei. There are still many open questions about the precise role of Pv-neurons and Purkinje cells, many of which could be answered if one could achieve rapid, reversible cell-type specific modulation of the activity of these neurons and observe the subsequent changes at the whole-animal level. The aim of these studies was to develop a novel method for the modulation of Pv-neurons and Purkinje cells in vivo and to use this method to investigate the significance of inhibition in these neuronal types with a variety of behavioral experiments in addition to tissue autoradiography, electrophysiology and immunohistochemistry. The GABA(A) receptor γ2 subunit was ablated from Pv-neurons and Purkinje cells in four separate mouse lines. Pv-Δγ2 mice had wide-ranging behavioral alterations and increased GABA-insensitive binding indicative of an altered GABA(A) receptor composition, particularly in midbrain areas. PC-Δγ2 mice experienced little or no motor impairment despite the lack of inhibition in Purkinje cells. In Pv-Δγ2-partial rescue mice, a reversal of motor and cognitive deficits was observed in addition to restoration of the wild-type γ2F77 subunit to the reticular nucleus of thalamus and the cerebellar molecular layer. 
In PC-Δγ2-swap mice, zolpidem sensitivity was restored to Purkinje cells, and the administration of systemic zolpidem evoked a transient motor impairment. On the basis of these results, it is concluded that this new method of cell-type-specific modulation is a feasible way to modulate the activity of selected neuronal types. The results support previous findings on the importance of Purkinje cells in motor control, and confirm the crucial involvement of Pv-neurons in a range of behavioral modalities.