964 results for "Two dimensions"
Abstract:
Each disaster presents a unique set of characteristics that are hard to determine a priori. Disaster management tasks are therefore inherently uncertain, requiring knowledge sharing and quick decision making that involve coordination across different levels and collaborators. While there has been increasing interest among both researchers and practitioners in using knowledge management to improve disaster management, little research has been reported on how to assess the dynamic nature of disaster management tasks, and on what kinds of knowledge sharing are appropriate for different dimensions of task uncertainty. Using a combination of qualitative and quantitative methods, this research study developed the dimensions, and corresponding measures, of the uncertain, dynamic characteristics of disaster management tasks and tested the relationships between those dimensions and task performance through the moderating and mediating effects of knowledge sharing. The research conceptualized and assessed task uncertainty along three dimensions (novelty, unanalyzability, and significance); knowledge sharing along two dimensions (knowledge sharing purposes and knowledge sharing mechanisms); and task performance along two dimensions (task effectiveness and task efficiency). Analysis of survey data collected from Miami-Dade County emergency managers suggested that knowledge sharing purposes and knowledge sharing mechanisms moderate and mediate the relationship between uncertain, dynamic disaster management tasks and task performance. Implications for research and practice, as well as directions for future research, are discussed.
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using lattice Boltzmann methods (LBMs). These methods contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains. The LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. An LBM-based macroscopic flow solver (based on Darcy's law) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve the inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches.
The analogy between Fick’s second law (diffusion equation) and the transient ground water flow equation is used to solve the transient head distribution. An altered-velocity flow solver with source/sink term is applied to simulate a drawdown curve. Hydraulic parameters like transmissivity and storage coefficient are linked with LB parameters. These capabilities complete the LBM’s effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
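The analogy invoked above can be stated explicitly. Writing $C$ for concentration, $D$ for the diffusion coefficient, $h$ for hydraulic head, $S$ for the storage coefficient, and $T$ for transmissivity (standard notation; the dissertation's own symbols may differ), Fick's second law and the transient confined-aquifer flow equation share the same mathematical form:

```latex
\underbrace{\frac{\partial C}{\partial t} = D\,\nabla^2 C}_{\text{Fick's second law}}
\qquad\qquad
\underbrace{S\,\frac{\partial h}{\partial t} = T\,\nabla^2 h}_{\text{transient ground water flow}}
```

A lattice Boltzmann diffusion solver for $C$ can thus be reinterpreted as a solver for $h$, with the hydraulic diffusivity $T/S$ playing the role of $D$.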
Abstract:
The theoretical construct of control has been defined as necessary (Etzioni, 1965), ubiquitous (Vickers, 1967), and on-going (E. Langer, 1983). Empirical measures, however, have not adequately given meaning to this potent construct, especially within complex organizations such as schools. Four stages of theory development and empirical testing of school building managerial control, using principals and teachers working within the nation's fourth largest district, are presented in this dissertation as follows: (1) a review and synthesis of social science theories of control across the literatures of organizational theory, political science, sociology, psychology, and philosophy; (2) a systematic analysis of school managerial activities performed at the building level within the context of curricular and instructional tasks; (3) the development of a survey questionnaire to measure school building managerial control; and (4) initial tests of construct validity, including inter-item reliability statistics, principal components analyses, and multivariate tests of significance. The social science synthesis provided support for four managerial control processes: standards, information, assessment, and incentives. The systematic analysis of school managerial activities led to a further categorization between the structural frequency of behaviors and the discretionary qualities of behaviors across each of the control processes and the curricular and instructional tasks. Teacher survey responses (N=486) showed a significant difference between these two dimensions of control, structural frequency and discretionary qualities, for standards, information, and assessments, but not for incentives.
The descriptive model of school managerial control suggests that (1) teachers perceive structural and discretionary managerial behaviors under information and incentives more clearly than activities representing standards or assessments, (2) standards are primarily structural while assessments are primarily qualitative, (3) teacher satisfaction is most closely related to the equitable distribution of incentives, (4) each of the structural managerial behaviors has a qualitative effect on teachers, and (5) certain qualities of managerial behaviors are perceived by teachers as distinctly discretionary, apart from school structure. The variables of teacher tenure and school effectiveness showed significant effects on school managerial control processes, while instructional levels (elementary, junior, and senior) and individual school differences were not found to be significant for the construct of school managerial control.
Abstract:
This thesis presents a certification method for semantic web service compositions that aims to statically ensure their functional correctness. The certification method encompasses two dimensions of verification, termed the base and functional dimensions. The base dimension concerns the correct application of each semantic web service in the composition, i.e., it ensures that each service invocation in the composition complies with its respective service definition. Certification in this dimension exploits the semantic compatibility between the invocation arguments and the formal parameters of the semantic web service. The functional dimension aims to ensure that the composition satisfies a given specification expressed in the form of preconditions and postconditions. This dimension is formalized by a Hoare-logic-based calculus. Partial correctness specifications involving compositions of semantic web services can be derived from the proposed deductive system. Our work is also characterized by the use of a fragment of description logic, namely ALC, to express the partial correctness specifications. To operationalize the proposed certification method, we developed a supporting environment for defining semantic web service compositions and for conducting the certification process. The certification method was experimentally evaluated by applying it to three different proofs of concept, which enabled a broad evaluation of the method.
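As background, the sequential composition rule of standard Hoare logic illustrates the style of deductive system involved (this is the textbook rule, not necessarily the thesis's exact calculus; in the proposed setting the assertions $P$, $R$, $Q$ would be drawn from the ALC fragment):

```latex
\frac{\{P\}\; S_1 \;\{R\} \qquad \{R\}\; S_2 \;\{Q\}}
     {\{P\}\; S_1 ; S_2 \;\{Q\}}
```

Read: if service $S_1$ takes precondition $P$ to postcondition $R$, and $S_2$ takes $R$ to $Q$, then the composition $S_1 ; S_2$ takes $P$ to $Q$.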
Abstract:
Research project presented for the degree of Master in Sport and Exercise Psychology.
Abstract:
Although trapped ion technology is well suited to quantum information science, scalability of the system remains one of the main challenges. One of the challenges associated with scaling the ion trap quantum computer is the ability to individually manipulate an increasing number of qubits. Using micro-mirrors fabricated with micro-electromechanical systems (MEMS) technology, laser beams are focused on individual ions in a linear chain and the focal point is steered in two dimensions. Multiple single-qubit gates are demonstrated on trapped 171Yb+ qubits, and the gate performance is characterized using quantum state tomography. The system features negligible crosstalk to neighboring ions (< 3 × 10⁻⁴) and switching speeds comparable to typical single-qubit gate times (< 2 µs). In a separate experiment, photons scattered from the 171Yb+ ion are coupled into an optical fiber with 63% efficiency using a high numerical aperture lens (0.6 NA). The coupled photons are directed to superconducting nanowire single photon detectors (SNSPDs), which provide a higher detector efficiency (69%) than traditional photomultiplier tubes (35%). The total system photon collection efficiency is increased from 2.2% to 3.4%, which allows for fast state detection of the qubit. For a detection beam intensity of 11 mW/cm², the average detection time is 23.7 µs with 99.885(7)% detection fidelity. The technologies demonstrated in this thesis can be integrated to form a single quantum register with all of the necessary resources to perform local gates as well as high-fidelity readout, and to provide a photon link to other systems.
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches in use today. With ORM, objects in object-oriented languages are mapped to records in the database, and object manipulations are automatically translated into SQL queries. As a result of this conceptual abstraction, developers do not need deep knowledge of databases; all too often, however, the abstraction leads to inefficient and incorrect database access code. This thesis therefore proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems by severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and that there is a need for tools that can help improve/tune the performance of ORM-based applications. We therefore propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average.
We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27–138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
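A classic instance of the kind of inefficient database access such static and dynamic detectors target is the "N+1 query" anti-pattern, where ORM-generated per-record queries in a loop do the work of a single join. A minimal sketch (plain sqlite3 stands in for the ORM; the schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
cur.executemany("INSERT INTO author VALUES (?, ?)",
                [(i, f"author{i}") for i in range(100)])
cur.executemany("INSERT INTO book VALUES (?, ?, ?)",
                [(i, i % 100, f"book{i}") for i in range(500)])

# Anti-pattern: one query per author -> 1 + N round trips to the database.
queries_n_plus_1 = 1
authors = cur.execute("SELECT id, name FROM author").fetchall()
for author_id, _name in authors:
    cur.execute("SELECT title FROM book WHERE author_id = ?", (author_id,))
    cur.fetchall()
    queries_n_plus_1 += 1

# Fix: a single join fetches the same data in one query.
queries_join = 1
rows = cur.execute(
    "SELECT author.name, book.title FROM author JOIN book ON book.author_id = author.id"
).fetchall()

print(queries_n_plus_1, queries_join)  # 101 1
```

Both versions retrieve the same 500 author/title pairs, but the loop issues 101 queries where the join issues one, which is why detectors rank this pattern by its performance impact.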
Abstract:
In this article we analyze the 2014 Debate on the State of the Nation. The methodology consists of coding the speeches of the prime minister, Mariano Rajoy (PP), and the then opposition leader, Alfredo Pérez Rubalcaba (PSOE), by extracting word clouds, branched maps, and word trees that reveal the most common concepts and premises. This preliminary analysis along two dimensions, quantitative and qualitative, makes the subsequent discourse analysis much easier and more viable; there we focus on the different types of arguments in the communicative act: claim/solution, circumstantial premises, goal premises, value premises, means-goal premises, and alternative options/addressing alternative options.
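The quantitative step, extracting the most common terms for word clouds and word trees, amounts to a term-frequency count. A minimal sketch (the sample text and stop-word list are invented, not taken from the debate transcripts):

```python
import re
from collections import Counter

def top_terms(speech: str, stopwords: set, n: int = 3):
    """Count word frequencies: the raw material of a word cloud."""
    words = re.findall(r"[a-záéíóúñ]+", speech.lower())
    return Counter(w for w in words if w not in stopwords).most_common(n)

# Invented sample text standing in for a transcribed speech.
speech = "empleo y reforma: la reforma del empleo impulsa el empleo"
stopwords = {"y", "la", "del", "el"}
print(top_terms(speech, stopwords))  # [('empleo', 3), ('reforma', 2), ('impulsa', 1)]
```

In practice the stop-word list and tokenizer would be tailored to Spanish parliamentary speech before the qualitative coding of argument types begins.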
Abstract:
The postwar development of the Intelligence Services in Japan has been based on two contrasting models, the centralized model of the USA and the collegial model of the UK, neither of which has been fully developed. This has led to clashes of institutional competencies and poor anticipation of threats to national security. This problem of opposing models has been partially overcome along two dimensions: externally, through cooperation with the US Intelligence Service under the Treaty of Mutual Cooperation and Security; and internally, through the pre-eminence in the national sphere of the Department of Public Safety. However, the emergence of a new global communicative dimension makes a remodeling of this dual model necessary, owing to the increasing capacity of individual actors to determine the dynamics of international events. This article examines these challenges for the Intelligence Services of Japan and proposes a reform based on this new global communicative dimension.
Abstract:
In contrast to the predominant heterogeneity of most African countries, whose societies comprise numerous ethnic groups or different religions and cultures, Cabo Verde is defined as a nation-state that recognizes a collective identity, expressed in its language and in the identification of common cultural elements belonging to a single archipelagic space. The starting point for this study is the concern to understand the notion of Nation in this country, which suggests the idea of a society in which homogeneous factors predominate over heterogeneous ones or, at the very least, a mechanism of cohabitation between these two dimensions, which may initially seem antagonistic but which lend a special meaning to the debate about national identity. Throughout this article, we highlight both the historical facts and the contribution of the main cultural movements, without ignoring the importance of the growing awareness of the idea of Nation.
Abstract:
For the current study, the authors examined the relationships between two dimensions of organizational climate and several indices of individual- and unit-level effectiveness. Specifically, the article proposes that an organization's service and training climate would be related to employee capabilities, operationalized in terms of frontline service capabilities and managerial support capabilities, and that such capabilities would be related to unit-level measures of employee turnover and sales growth. Using survey and operational data from 201 management and frontline staff members in 22 units of a national restaurant chain, the results of correlation and regression analyses generally supported the proposed relationships. This study replicates and extends previous research and provides a foundation for future conceptual development and empirical work in this research area.
Abstract:
My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, compared with 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physician labor supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to determine the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through the mechanism of learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' response to monetary incentives and to estimate their productivity profile (measuring the variation in physician productivity over the course of their careers), using panel data on Quebec physicians drawn from surveys and administrative sources. The data contain information on each physician's labor supply, the different types of services provided, and their prices.
These data cover a period during which the Quebec government changed the relative prices of health services. I used a model-based approach to develop and estimate a structural model of labor supply in which the physician is multitasking. In my model, physicians choose the number of hours worked as well as the allocation of those hours across the different services offered, with service prices set by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services offered and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I applied the model to panel data on Quebec physicians' earnings merged with data on the same physicians' time use. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to modify their production of the various services. Although previous studies have often sought to compare physician behavior across compensation systems, there is relatively little information on how physicians respond to changes in the prices of health services. Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in health service prices.
My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an aging population, particularly as the most experienced (and most productive) physicians retire. In the first essay, I estimated the income function conditional on hours worked, using the instrumental variables method to control for possible endogeneity of hours worked. As instruments I used indicator variables for physicians' ages, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more hours of work to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours worked, the volume of services provided, and physicians' incomes. To do so, I used the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price has risen the most.
The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I used the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (average elasticity of -0.02) as well as their clinical hours worked (average elasticity of -0.07). They would also reduce the volume of services provided (average elasticity of -0.05). Third, I exploited the natural link between the income of a fee-for-service physician and their productivity to establish the physician productivity profile. To do so, I modified the model specification to account for the relationship between a physician's productivity and their experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing, concave function of experience. Moreover, this profile is robust to the use of effective experience (the quantity of services produced) as a control variable and to the removal of parametric assumptions. In addition, one more year of experience increases a physician's production of services by 1,003 Canadian dollars. I used the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
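The replacement-ratio logic can be sketched with a toy productivity profile. The functional form and all numbers below are invented for illustration; they are not the thesis's estimates (which yield a ratio of 1.2):

```python
import math

def productivity(experience_years: float) -> float:
    """Toy increasing, concave experience-productivity profile
    (annual service output in CAD; all parameters are invented)."""
    return 120_000 * (1 - math.exp(-0.1 * (experience_years + 18)))

# Replacement ratio: how many new entrants match one retiring
# 30-year veteran's annual output.
ratio = productivity(30) / productivity(0)
print(round(ratio, 2))  # 1.19 for these invented parameters
```

Concavity is what keeps the ratio modest: most of the productivity gain from learning-by-doing accrues early in the career, so an entrant is already a sizable fraction of a veteran's output.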
Abstract:
This research offers a critical analysis of the law applicable to two dimensions of risk management: compensation for damages through insurance, and risk prevention through the precautionary principle. From an interdisciplinary perspective, the interaction of law with political, economic, scientific, and social rationalities is highlighted through the opposition of two theories: Ulrich Beck's Risk Society and François Ewald's Insurance Society. The argument reveals the different meanings of the limits of private insurance in law and the dissonant discourses regarding the legal strategies used in the face of risk. The thesis thus provides essential reference points for critical legal reflection. The originality of the angle of analysis, which takes into account the evolution of law in connection with the change in political rationality that occurred in the nineteenth century with the industrialization of Western societies, enriches legal epistemology. Among other things, it leads to a reflection on the evolution of theoretical conceptions of law and of its expected social role.
Abstract:
The power of computer game technology is currently being harnessed to produce “serious games”. These “games” are targeted at the education and training marketplace, and employ key game-engine components such as the graphics and physics engines to produce realistic “digital-world” simulations of the real “physical world”. Many approaches are driven by the technology and often lack consideration of a firm pedagogical underpinning. The authors believe that analysis and deployment of the technological and pedagogical dimensions should occur together, with the pedagogical dimension providing the lead. This chapter explores the relationship between these two dimensions and examines how “pedagogy may inform the use of technology”, that is, how various learning theories may be mapped onto the affordances of computer game engines. Autonomous and collaborative learning approaches are discussed. The design of a serious game is broken down into spatial and temporal elements. The spatial dimension is related to theories of knowledge structures, especially “concept maps”. The temporal dimension is related to “experiential learning”, especially the approach of Kolb. The multi-player aspect of serious games is related to theories of “collaborative learning”, discussed in terms of “discourse” versus “dialogue”. Several general guiding principles are explored, such as the use of “metaphor” (including metaphors of space, embodiment, systems thinking, the internet, and emergence). The topological design of a serious game is also highlighted. The discussion of pedagogy is related to various serious games we have recently produced and researched, and is presented in the hope of informing the “serious game community”.
Abstract:
Coupled map lattices (CMLs) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the “plastic-CML” as a paradigm to extract (segment) objects in an image. Here, the image is applied as a set of forces to a metal sheet that is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our plastic-CML in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and we present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very large scale integration (VLSI) implementation.
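For readers unfamiliar with the formalism, a coupled map lattice applies a local map at every site and diffusively couples nearest neighbors. The sketch below is the standard one-dimensional diffusive CML with a logistic local map, not the plastic-CML introduced in the paper:

```python
import random

def cml_step(x, f, eps):
    """One update of a 1-D diffusively coupled map lattice on a ring:
    x_i(t+1) = (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1}))."""
    n = len(x)
    fx = [f(v) for v in x]
    return [(1 - eps) * fx[i] + (eps / 2) * (fx[i - 1] + fx[(i + 1) % n])
            for i in range(n)]

# Example: logistic local map on a ring of 64 sites.
random.seed(0)
x = [random.random() for _ in range(64)]
logistic = lambda v: 3.9 * v * (1.0 - v)
for _ in range(100):
    x = cml_step(x, logistic, 0.3)
print(len(x))  # 64
```

In the plastic-CML setting, the local map and coupling would instead encode the elastic/plastic response of the metal sheet, with the image supplying the applied forces.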