884 results for: unified theories and models of strong and electroweak
Abstract:
Pliocene and Pleistocene sediments of the Oman margin and Owen Ridge are characterized by continuous alternation of light and dark layers of nannofossil ooze and marly nannofossil ooze and by cyclic variation of wet-bulk density. The origin of the wet-bulk density and color cycles was examined at Ocean Drilling Program Site 722 on the Owen Ridge and Site 728 on the Oman margin using 3.4-m.y.-long GRAPE (gamma ray attenuation) wet-bulk density records and records of sediment color represented as changes in gray level on black-and-white core photographs. At Sites 722 and 728 sediments display a weak correlation of decreasing wet-bulk density with increasing darkness of sediment color. Wet-bulk density is inversely related to organic carbon concentration and displays little relation to calcium carbonate concentration, which varies inversely with the abundance of terrigenous sediment components. Sediment color darkens with increasing terrigenous sediment abundance (decreasing carbonate content) and with increasing organic carbon concentration. Upper Pleistocene sediments at Site 722 display a regular pattern of dark intervals coinciding with glacial periods, whereas at Site 728 the pattern of color variation is more irregular. There is no consistent relationship between the dark intervals and their relative wet-bulk density in the upper Pleistocene sections at Sites 722 and 728, suggesting that the dominance of organic matter or terrigenous sediment as the primary coloring agent varies. Spectra of the wet-bulk density and optical density time series display concentrations of variance at the orbital periodicities of 100, 41, 23, and 19 k.y. A strong 41-k.y. periodicity characterizes wet-bulk density and optical density variation at both sites throughout most of the past 3.4 m.y. Cyclicity at the 41-k.y. periodicity is characterized by a lack of coherence between wet-bulk density and optical density, suggesting that the bulk density and color cycles reflect the mixed influence of varying abundances of terrigenous sediments and organic matter. The 23-k.y. periodicity in the wet-bulk density and sediment color cycles is generally characterized by significant coherence between wet-bulk density and optical density, reflecting an inverse relationship between these parameters. Varying organic matter abundance, associated with changes in productivity or preservation, is inferred to influence changes in wet-bulk density and sediment color more strongly at this periodicity.
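As a hedged illustration of the spectral approach described above (not the authors' code), the following Python sketch estimates coherence between two records at the orbital periodicities; the 1-k.y. sampling interval, the synthetic stand-in series, and all variable names are assumptions.

```python
# Hypothetical sketch, not the authors' code: coherence between two
# uniformly sampled records, with synthetic stand-ins for the GRAPE
# wet-bulk density and optical-density series (assumed 1-k.y. sampling).
import numpy as np
from scipy.signal import coherence

dt = 1.0                          # sampling interval in k.y. (assumption)
t = np.arange(0, 3400, dt)        # a 3.4-m.y.-long record

rng = np.random.default_rng(0)
# Shared 41-k.y. obliquity component plus independent noise in each record
density = np.sin(2 * np.pi * t / 41) + rng.normal(0, 1, t.size)
color = -np.sin(2 * np.pi * t / 41 + 0.3) + rng.normal(0, 1, t.size)

f, Cxy = coherence(density, color, fs=1 / dt, nperseg=512)
for period in (100, 41, 23, 19):  # orbital periodicities in k.y.
    i = np.argmin(np.abs(f - 1 / period))
    print(f"{period:>3} k.y.: coherence = {Cxy[i]:.2f}")
```

On such synthetic data only the shared 41-k.y. component shows high coherence, mirroring how the study distinguishes coherent from incoherent cycles.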
Abstract:
The last few years have seen a growing effort to reduce CO2 emissions and their negative environmental impact. In the transport industry more specifically, vehicle weight reduction appears as the most straightforward option to achieve this objective. To this end, Mg alloys constitute a significant weight-saving material alternative. Many efforts have been devoted over the last decade to understanding the main mechanisms governing the plasticity of these materials and, despite being already widely used, high-pressure die-cast and wrought Mg alloys are still the subject of intense research campaigns. Developing models that capture the complexity inherent in the deformation of Mg alloys is now timely. This PhD thesis constitutes an attempt to better understand the relationship between the microstructure and the mechanical behavior of Mg alloys, resulting in polycrystalline models that successfully predict macro- and microscopic properties. Plastic deformation of Mg alloys is driven by a combination of deformation mechanisms specific to their hexagonal crystal structure, namely basal, prismatic and pyramidal dislocation slip as well as twinning. Wrought Mg alloys present strong textures, and thus specific deformation mechanisms are preferentially activated depending on the orientation of the applied load. In this work a crystal plasticity finite element model has been developed in order to understand the macro- and micromechanical behavior of a rolled Mg AZ31 alloy (Mg-3wt.%Al-1wt.%Zn). The model includes twinning and accounts for slip-slip, slip-twin and twin-twin hardening interactions. Upon calibration and validation against experiments, the model successfully predicts the activity of the various deformation mechanisms and the evolution of the texture at different deformation stages. Furthermore, a combined three-dimensional electron backscatter diffraction and modeling approach has been adopted to investigate the effect of grain boundaries on twin propagation in the same material. Both experiments and simulations confirm that the misorientation angle has a critical influence on twin propagation. Non-Schmid effects, i.e. plastic deformation events that do not comply with the Schmid law with respect to the applied stress, are absent in the vicinity of low-misorientation boundaries and become more abundant as the misorientation angle increases. This research also proves that twin morphology is highly influenced by the Schmid factor.
Finally, casting processes usually lead to the formation of significant amounts of gas and shrinkage microporosity, which adversely affect the mechanical properties. The application of hydrostatic pressure after casting can reduce the porosity and improve the properties, but little is known about its effects on pore size and morphology. In this work, an experimental-computational approach based on X-ray computed tomography, image analysis and finite element analysis is used to determine the 3D porosity distribution and its evolution under hydrostatic pressure in a high-pressure die-cast Mg AZ91 alloy (Mg-9wt.%Al-1wt.%Zn). The real 3D pore distribution obtained by tomography is used as input for finite element simulations with an isotropic hardening law, and the model is calibrated and validated against experimental stress-strain curves. The results reveal that the pressure treatment has a significant influence on both the volume and shape changes of individual pores, which have been precisely quantified and which are found to be related to the initial pore volume. In conclusion, the crystal plasticity model proposed in this work successfully describes the intrinsic deformation mechanisms of Mg alloys at both the mesoscale and the microscale. More specifically, it can capture slip and twin activities, their interactions, as well as the potential porosity effects arising from casting processes.
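A brief, hypothetical sketch of the Schmid-factor calculation underlying the twinning analysis may help; it is not taken from the thesis, and the grain orientation and slip-system vectors are illustrative assumptions.

```python
# Hypothetical sketch, not from the thesis: the Schmid factor
# m = |cos(phi)| * |cos(lambda)| for a slip or twin system under
# uniaxial loading, as used when discussing (non-)Schmid behavior.
import numpy as np

def schmid_factor(load_dir, plane_normal, shear_dir):
    """Schmid factor for uniaxial loading along load_dir (Cartesian vectors)."""
    l = load_dir / np.linalg.norm(load_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    s = shear_dir / np.linalg.norm(shear_dir)
    return abs(l @ n) * abs(l @ s)

# Illustrative case: basal slip in a grain whose c-axis is tilted 45 degrees
# from the loading axis (vectors expressed in the crystal's Cartesian frame).
load = np.array([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])
basal_normal = np.array([0.0, 0.0, 1.0])   # (0001) basal plane
slip_dir = np.array([1.0, 0.0, 0.0])       # an <a>-type direction
print(schmid_factor(load, basal_normal, slip_dir))  # ~0.5, the maximum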
Abstract:
I will start by discussing some aspects of Kagitcibasi's Theory of Family Change: its current empirical status and, more importantly, its focus on universal human needs and the consequences of this focus. Family Change Theory emphasizes the universality of the basic human needs of autonomy and relatedness and, at the culture level, treats cultural norms and family values as reflecting a culture's capacity to fulfill its members' respective needs; the theory therefore advocates balanced cultural norms of independence and interdependence. As a normative theory it postulates the necessity of a synthetic family model of emotional interdependence as an alternative to extreme models of total independence and total interdependence. Generalizing from this, I will sketch a theoretical model centered on a dynamic and dialectical process of fit between individual and culture, and between culture, universal human needs, and related social practices. I will discuss this model using a recent cross-cultural project on implicit theories of self/world and primary/secondary control orientations as an example. Implications for migrating families and acculturating individuals are also discussed.
Abstract:
Leadership categorisation theory suggests that followers rely on a hierarchical cognitive structure in perceiving leaders and the leadership process, consisting of three levels: superordinate, basic and subordinate. The predominant view is that followers rely on Implicit Leadership Theories (ILTs) at the basic level in making judgments about managers. The thesis examines whether this presumption holds by proposing and testing two competing conceptualisations: the congruence between basic-level ILTs (general leader) and perceptions of the actual manager, and between subordinate-level ILTs (job-specific leader) and the actual manager. The job-specific conceptualisation builds on the context-related assertions of the ILT explanatory models: leadership categorisation, information processing and connectionist network theories. Further, the thesis addresses the effects of ILT congruence at the group level. The hypothesised model suggests that Leader-Member Exchange (LMX) acts as a mediator between ILT congruence and outcomes. Three studies examined the proposed model. The first was cross-sectional, with 175 students reporting on work experience during a 1-year industrial placement. The second was longitudinal, with a sample of 343 students engaging in a business simulation in groups with formal leadership. The final study was a cross-sectional survey across several organisations with a sample of 178. A novel approach was taken to the congruence analysis: the hypothesised models were tested using Latent Congruence Modelling (LCM), which accounts for measurement error and overcomes most limitations of traditional approaches. The first two studies confirm the traditional view that employees rely on basic-level ILTs in making judgments about their managers, with important implications, and show that LMX mediates the relationship between ILT congruence and work-related outcomes (performance, job satisfaction, well-being, task satisfaction, intragroup conflict, group satisfaction, team realness, team-member exchange, group performance). The third study confirms this with conflict, well-being, self-rated performance and commitment as outcomes.
Abstract:
The notion of a model of development and distribution of software (MDDS) is introduced, and its role in the efficiency of software products is stressed. Two classical MDDSs are presented, and some attempts to adapt them to contemporary trends in web-based software design are described. Advantages and shortcomings of the resulting models are outlined. In conclusion, the desired features of a better MDDS for web-based solutions are given.
Abstract:
One of the key problems of modern financial accounting is how to identify the stakeholders to whom financial reporting is addressed. This question was already central to the classical, since-superseded stakeholder theories and has become crucial in modern and post-modern ones. Experience shows that the group of identified stakeholders has widened and shifted over time. The study of this evolution has identified many characteristics of financial reporting through which the relevant regulation can be improved. It has also made directly observable the conditions under which the need for an external accounting regulator can be justified, and has revealed situations in which regulator-driven, "externally directed" financial reporting leads to suboptimal outcomes. The paper presents the evolution of stakeholder theories starting from the classical views, and shows what the currently accepted coalition view of the firm added to its predecessors, above all how it called the external regulator into being. It also highlights the main issues raised by the post-modern theories, which try to fit current questions into the existing stakeholder models, and offers Hungarian evidence for the suboptimal scenario mentioned above, in which regulation that is not tax-driven proves suboptimal.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, versus 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through learning-by-doing, and use the estimated parameters to find how many inexperienced physicians must be recruited to replace a retiring experienced physician while keeping the supply of health services constant. The thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (the variation in productivity over the course of their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices, and they cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural model of labour supply in which the physician is a multitasking agent: physicians choose the number of hours worked and the allocation of those hours across the services they provide, with service prices set by the government. The model yields an earnings equation that depends on hours worked and on a price index representing the marginal return to hours when they are allocated optimally across services. The price index depends on service prices and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' earnings merged with time-use data for the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyse the use of monetary incentives to induce physicians to change their production of different services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
Current debates in Canadian health-policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects arising from changes in the relative prices of health services. Second, I analyse how experience affects physician productivity. This has important implications for the recruitment of physicians to meet the growing demand of an aging population, particularly as the most experienced (most productive) physicians retire. In the first essay, I estimate the earnings function conditional on hours worked, using instrumental variables to control for the possible endogeneity of hours. As instruments I use physician-age indicator variables, the marginal tax rate, the stock-market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, allowing a test of whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives: a change in relative prices leads physicians to allocate more hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analysing variation in physicians' hours worked, the volume of services provided, and physician income, using the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price rose the most, while the cross-price substitution elasticities are also large but negative. There is, moreover, an income effect associated with fee increases. I use the estimated structural parameters to simulate a general 32% increase in service prices. The results indicate that physicians would reduce their total hours worked (mean elasticity -0.02), their clinical hours worked (mean elasticity -0.07), and the volume of services provided (mean elasticity -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish the productivity profile of physicians. To do so, I modify the model specification to account for the relationship between a physician's productivity and experience. I estimate the earnings equation using unbalanced panel data, correcting for the non-random pattern of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, one extra year of experience increases a physician's production of services by 1,003 Canadian dollars. I use the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
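As a hedged illustration of the first essay's instrumental variables strategy (not the thesis code), the following Python sketch implements textbook two-stage least squares on toy data in which hours worked are endogenous; the instruments, coefficients, and variable names are all assumptions.

```python
# Hypothetical sketch, not the thesis code: two-stage least squares (2SLS)
# for an earnings equation with endogenous hours worked, instrumented in the
# spirit of the first essay (e.g., tax rate, market return and its powers).
import numpy as np

def two_sls(y, x_endog, exog, Z):
    """2SLS: project the endogenous regressor on the instruments, then OLS."""
    W = np.column_stack([exog, Z])                      # full instrument set
    coef, *_ = np.linalg.lstsq(W, x_endog, rcond=None)  # first stage
    x_hat = W @ coef                                    # fitted hours
    X = np.column_stack([exog, x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # second stage
    return beta

rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 3))                    # instruments (illustrative)
u = rng.normal(size=n)                         # unobserved taste for work
hours = Z @ np.array([0.5, -0.3, 0.2]) + u
income = 1.0 + 2.0 * hours + 0.8 * u + rng.normal(size=n)  # hours endogenous
const = np.ones((n, 1))
print(two_sls(income, hours, const, Z))        # close to [1.0, 2.0]
```

Because income and hours share the unobserved term u, plain OLS on this toy data would be biased; the instruments recover the true coefficients.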
Abstract:
Resource allocation decisions are made to serve the current emergency without knowing which future emergency will occur. Different ordered combinations of emergencies result in different performance outcomes. Even though future decisions can be anticipated with scenarios, previous models assume that events over a time interval are independent. This dissertation instead assumes that events are interdependent, because the speed reduction and rubbernecking caused by an initial incident provoke secondary incidents. The misconception that secondary incidents are uncommon has led to the look-ahead concept being overlooked. This dissertation pioneers the relaxation of the independence assumption in the assignment of emergency vehicles. When an emergency is detected and a request arrives, an appropriate emergency vehicle is immediately dispatched. We provide tools for quantifying impacts based on the fundamentals of incident occurrence through the identification, prediction, and interpretation of secondary incidents. A proposed online dispatching model minimizes the cost of moving the next emergency unit while keeping the response as close to optimal as possible. Using the look-ahead concept, the online model flexibly re-computes the solution, basing future decisions on present requests. We introduce various online dispatching strategies with visualizations of the algorithms, and provide insights into their differences in behavior and solution quality; a greedy baseline is sketched below for contrast. The experimental evidence indicates that the algorithm works well in practice. After serving a designated request, the available and/or remaining vehicles are relocated to a new base for the next emergency. System costs will be excessive if dispatching delay is ignored when relocating response units. This dissertation presents an integrated method that begins with a location phase to manage initial incidents and progresses through a dispatching phase to manage the stochastic occurrence of subsequent incidents. Previous studies used the frequency of independent incidents and ignored scenarios in which two incidents occur within proximal regions and intervals. The proposed analytical model relaxes the structural assumptions of the Poisson process (independent increments) and incorporates the evolution of primary and secondary incident probabilities over time. The mathematical model overcomes several limiting assumptions of previous models, such as no waiting time, a rule of returning to the original depot, and fixed depots. The temporary locations made flexible by look-ahead are compared with the current practice of locating units at depots based on Poisson theory. A linearization of the formulation is presented, and an efficient heuristic algorithm is implemented to handle a large-scale problem in real time.
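For contrast with the look-ahead model described above, a minimal greedy baseline can be sketched (a hypothetical illustration, not the dissertation's algorithm): each arriving request is served by the nearest idle unit, with no anticipation of secondary incidents.

```python
# Hypothetical greedy baseline, not the dissertation's look-ahead model:
# assign the nearest idle unit to each arriving request. The look-ahead
# approach would additionally weigh future (secondary-incident) scenarios.
import math

def dispatch(units, request):
    """Pick the nearest idle unit (units: dict name -> (x, y), or None if busy)."""
    idle = {name: pos for name, pos in units.items() if pos is not None}
    if not idle:
        return None                        # no unit available; request waits
    name = min(idle, key=lambda u: math.dist(idle[u], request))
    units[name] = None                     # mark the chosen unit as busy
    return name

units = {"U1": (0.0, 0.0), "U2": (5.0, 5.0), "U3": (9.0, 1.0)}
for req in [(8.0, 2.0), (1.0, 1.0), (4.0, 6.0)]:   # requests arrive online
    print(req, "->", dispatch(units, req))
```

Because the greedy rule commits each unit myopically, an initial incident that spawns a nearby secondary incident can leave no well-placed unit available, which is exactly the failure mode the interdependent, look-ahead formulation addresses.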
Abstract:
Throughout its history, several significant earthquakes have shaken the Lower Tagus Valley (Portugal). These earthquakes were destructive; some strong earthquakes were produced by large ruptures in offshore structures southwest of the Portuguese coastline, and other moderate earthquakes were produced by local faults. In recent years, several studies have successfully obtained strong ground motion syntheses for the Lower Tagus Valley using the finite difference method. To confirm the velocity model of this sedimentary basin obtained from geophysical and geological data, we analysed ambient seismic noise measurements by applying the horizontal-to-vertical spectral ratio (HVSR) method. This study reveals the dependence of the frequency and amplitude of the low-frequency HVSR peaks (0.2–2 Hz) on the sediment thickness. We obtained the depth of the Cenozoic basement along a profile transverse to the basin by inverting these ratios, imposing constraints from seismic reflection data, boreholes, seismic soundings, and gravimetric and magnetic potential-field data. This technique enables us to improve the existing three-dimensional model of the Lower Tagus Valley structure. The improved model will be decisive for improving strong motion predictions in the earthquake hazard analysis of this highly populated basin. The methodology discussed can be applied to any other sedimentary basin.
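A hedged sketch of the basic HVSR computation may clarify the method (it is not the authors' processing chain); it assumes pre-processed three-component noise records as NumPy arrays and uses Welch power spectra, with window lengths and variable names chosen for illustration.

```python
# Hypothetical sketch, not the authors' code: the basic H/V spectral ratio
# on a three-component ambient-noise record, assuming detrended, tapered
# numpy arrays sampled at fs Hz. Real workflows average many windows.
import numpy as np
from scipy.signal import welch

def hvsr(north, east, vertical, fs):
    """Horizontal-to-vertical spectral ratio from Welch power spectra."""
    f, pn = welch(north, fs=fs, nperseg=4096)
    _, pe = welch(east, fs=fs, nperseg=4096)
    _, pv = welch(vertical, fs=fs, nperseg=4096)
    h = np.sqrt((pn + pe) / 2.0)          # mean horizontal amplitude spectrum
    return f, h / np.sqrt(pv)

# Synthetic stand-in for a 10-minute noise record (real data come from
# field measurements; pure noise gives a flat ratio near 1).
fs = 100.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(2)
n, e, v = (rng.normal(size=t.size) for _ in range(3))
f, ratio = hvsr(n, e, v, fs)
band = (f >= 0.2) & (f <= 2.0)            # low-frequency band of interest
print("peak at %.2f Hz" % f[band][np.argmax(ratio[band])])
```

On real records the frequency of the peak in the 0.2–2 Hz band tracks sediment thickness, which is the dependence the inversion in the study exploits.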
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for finite element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT-based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and a slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, with care taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany). The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner, and the 3D model of the inner canal reconstructed from the microCT data was compared against the model reconstructed from the clinical CT data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the whole-bone 3D model generated from clinical CT data with the reference model gave a mean error of 0.19 ± 0.16 mm, with the shaft more accurate (0.08 ± 0.06 mm) than the proximal (0.26 ± 0.18 mm) and distal (0.22 ± 0.16 mm) parts. The comparison between the outer 3D model generated from microCT data and the contact scanner model gave a mean error of 0.10 ± 0.03 mm, indicating that microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from clinical CT data gave an error of 0.09 ± 0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus made it possible to validate both the outer surface of a CT-based 3D model of an ovine femur and the surface of the model's medullary canal.
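As a hypothetical illustration of the surface-comparison step (the study used Rapidform, not the code below), nearest-neighbour distances between two digitised point clouds can be summarised as a mean and standard deviation in a few lines of Python; the point clouds and noise level here are synthetic assumptions.

```python
# Hypothetical sketch, not the study's Rapidform workflow: summarise the
# disparity between two digitised surfaces as nearest-neighbour distances
# from each test-model vertex to the reference cloud (mean +/- std, in mm).
import numpy as np
from scipy.spatial import cKDTree

def surface_disparity(test_points, reference_points):
    """Mean and std of point-to-nearest-point distances."""
    tree = cKDTree(reference_points)
    d, _ = tree.query(test_points)
    return d.mean(), d.std()

# Toy point clouds standing in for the CT-based and contact-scanner models
rng = np.random.default_rng(3)
reference = rng.uniform(0, 50, size=(20000, 3))               # reference cloud
test = reference[:5000] + rng.normal(0, 0.1, size=(5000, 3))  # perturbed copy
mean, std = surface_disparity(test, reference)
print(f"{mean:.2f} +/- {std:.2f} mm")
```

Point-to-nearest-point distance is a simple proxy for the point-to-surface distance a dedicated inspection package computes, but it conveys how errors such as 0.19 ± 0.16 mm are reported.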
Abstract:
Any theory of thinking, teaching or learning rests on an underlying philosophy of knowledge. Mathematics education is situated at the nexus of two fields of inquiry, namely mathematics and education. However, numerous other disciplines interact with these two fields, which compounds the complexity of developing theories that define mathematics education. We first address the issue of clarifying a philosophy of mathematics education before attempting to answer whether theories of mathematics education are constructible. In doing so we draw on the foundational writings of Lincoln and Guba (1994), in which they clearly posit that any discipline within education, in our case mathematics education, needs to clarify for itself the following questions: (1) What is reality, or what is the nature of the world around us? (2) How do we go about knowing the world around us? [the methodological question, which presents possibilities to various disciplines to develop methodological paradigms] and (3) How can we be certain of the "truth" of what we know? [the epistemological question]